Feb 26 11:10:52 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 26 11:10:52 crc restorecon[4695]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 11:10:52 crc restorecon[4695]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc 
restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc 
restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 
11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc 
restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:10:52 crc 
restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:52 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:52 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 
11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:10:53 crc 
restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc 
restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 11:10:53 crc restorecon[4695]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc 
restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:10:53 crc restorecon[4695]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:10:53 crc restorecon[4695]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 26 11:10:55 crc kubenswrapper[4699]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 11:10:55 crc kubenswrapper[4699]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 26 11:10:55 crc kubenswrapper[4699]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 11:10:55 crc kubenswrapper[4699]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 26 11:10:55 crc kubenswrapper[4699]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 26 11:10:55 crc kubenswrapper[4699]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.459654 4699 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464470 4699 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464498 4699 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464508 4699 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464516 4699 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464521 4699 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464529 4699 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464535 4699 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464541 4699 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464546 4699 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464561 4699 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464566 4699 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464571 4699 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464576 4699 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464580 4699 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464585 4699 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464591 4699 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464595 4699 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 
11:10:55.464600 4699 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464604 4699 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464609 4699 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464613 4699 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464618 4699 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464623 4699 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464629 4699 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464633 4699 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464637 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464644 4699 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464650 4699 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464655 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464661 4699 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464666 4699 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464670 4699 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464677 4699 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464683 4699 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464690 4699 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464696 4699 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464701 4699 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464707 4699 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464713 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464720 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464726 4699 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464732 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464739 4699 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464744 4699 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464751 4699 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464757 4699 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464761 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464768 4699 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464773 4699 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464778 4699 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464783 4699 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464788 4699 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464793 4699 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464798 4699 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464804 4699 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464811 4699 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464819 4699 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464826 4699 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464831 4699 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464836 4699 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464842 4699 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464846 4699 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464851 4699 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464856 4699 feature_gate.go:330] unrecognized feature gate: Example
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464860 4699 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464864 4699 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464870 4699 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464875 4699 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464880 4699 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464884 4699 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.464889 4699 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465002 4699 flags.go:64] FLAG: --address="0.0.0.0"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465014 4699 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465023 4699 flags.go:64] FLAG: --anonymous-auth="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465032 4699 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465040 4699 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465046 4699 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465054 4699 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465061 4699 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465067 4699 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465074 4699 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465080 4699 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465085 4699 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465090 4699 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465094 4699 flags.go:64] FLAG: --cgroup-root=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465099 4699 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465106 4699 flags.go:64] FLAG: --client-ca-file=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465139 4699 flags.go:64] FLAG: --cloud-config=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465145 4699 flags.go:64] FLAG: --cloud-provider=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465151 4699 flags.go:64] FLAG: --cluster-dns="[]"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465160 4699 flags.go:64] FLAG: --cluster-domain=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465166 4699 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465171 4699 flags.go:64] FLAG: --config-dir=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465177 4699 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465183 4699 flags.go:64] FLAG: --container-log-max-files="5"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465191 4699 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465197 4699 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465202 4699 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465208 4699 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465213 4699 flags.go:64] FLAG: --contention-profiling="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465217 4699 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465223 4699 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465228 4699 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465233 4699 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465240 4699 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465245 4699 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465251 4699 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465256 4699 flags.go:64] FLAG: --enable-load-reader="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465261 4699 flags.go:64] FLAG: --enable-server="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465266 4699 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465273 4699 flags.go:64] FLAG: --event-burst="100"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465279 4699 flags.go:64] FLAG: --event-qps="50"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465285 4699 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465290 4699 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465296 4699 flags.go:64] FLAG: --eviction-hard=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465303 4699 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465310 4699 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465315 4699 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465321 4699 flags.go:64] FLAG: --eviction-soft=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465325 4699 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465330 4699 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465337 4699 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465343 4699 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465348 4699 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465353 4699 flags.go:64] FLAG: --fail-swap-on="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465359 4699 flags.go:64] FLAG: --feature-gates=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465366 4699 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465372 4699 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465378 4699 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465383 4699 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465389 4699 flags.go:64] FLAG: --healthz-port="10248"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465394 4699 flags.go:64] FLAG: --help="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465400 4699 flags.go:64] FLAG: --hostname-override=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465405 4699 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465411 4699 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465416 4699 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465421 4699 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465426 4699 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465432 4699 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465437 4699 flags.go:64] FLAG: --image-service-endpoint=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465442 4699 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465447 4699 flags.go:64] FLAG: --kube-api-burst="100"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465452 4699 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465458 4699 flags.go:64] FLAG: --kube-api-qps="50"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465463 4699 flags.go:64] FLAG: --kube-reserved=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465469 4699 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465474 4699 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465479 4699 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465484 4699 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465490 4699 flags.go:64] FLAG: --lock-file=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465497 4699 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465503 4699 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465509 4699 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465518 4699 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465525 4699 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465531 4699 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465536 4699 flags.go:64] FLAG: --logging-format="text"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465541 4699 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465547 4699 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465552 4699 flags.go:64] FLAG: --manifest-url=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465557 4699 flags.go:64] FLAG: --manifest-url-header=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465564 4699 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465570 4699 flags.go:64] FLAG: --max-open-files="1000000"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465576 4699 flags.go:64] FLAG: --max-pods="110"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465582 4699 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465587 4699 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465592 4699 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465598 4699 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465603 4699 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465609 4699 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465614 4699 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465629 4699 flags.go:64] FLAG: --node-status-max-images="50"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465635 4699 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465640 4699 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465645 4699 flags.go:64] FLAG: --pod-cidr=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465650 4699 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465661 4699 flags.go:64] FLAG: --pod-manifest-path=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465666 4699 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465671 4699 flags.go:64] FLAG: --pods-per-core="0"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465677 4699 flags.go:64] FLAG: --port="10250"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465682 4699 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465687 4699 flags.go:64] FLAG: --provider-id=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465693 4699 flags.go:64] FLAG: --qos-reserved=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465699 4699 flags.go:64] FLAG: --read-only-port="10255"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465705 4699 flags.go:64] FLAG: --register-node="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465710 4699 flags.go:64] FLAG: --register-schedulable="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465720 4699 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465731 4699 flags.go:64] FLAG: --registry-burst="10"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465739 4699 flags.go:64] FLAG: --registry-qps="5"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465744 4699 flags.go:64] FLAG: --reserved-cpus=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465751 4699 flags.go:64] FLAG: --reserved-memory=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465758 4699 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465763 4699 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465769 4699 flags.go:64] FLAG: --rotate-certificates="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465774 4699 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465779 4699 flags.go:64] FLAG: --runonce="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465784 4699 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465789 4699 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465794 4699 flags.go:64] FLAG: --seccomp-default="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465800 4699 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465806 4699 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465811 4699 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465816 4699 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465822 4699 flags.go:64] FLAG: --storage-driver-password="root"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465827 4699 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465832 4699 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465838 4699 flags.go:64] FLAG: --storage-driver-user="root"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465843 4699 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465849 4699 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465855 4699 flags.go:64] FLAG: --system-cgroups=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465860 4699 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465870 4699 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465875 4699 flags.go:64] FLAG: --tls-cert-file=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465881 4699 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465890 4699 flags.go:64] FLAG: --tls-min-version=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465895 4699 flags.go:64] FLAG: --tls-private-key-file=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465901 4699 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465907 4699 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465915 4699 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465921 4699 flags.go:64] FLAG: --v="2"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465930 4699 flags.go:64] FLAG: --version="false"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465939 4699 flags.go:64] FLAG: --vmodule=""
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465946 4699 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.465952 4699 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466078 4699 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466087 4699 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466093 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466098 4699 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466104 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466128 4699 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466134 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466139 4699 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466145 4699 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466150 4699 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466154 4699 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466159 4699 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466165 4699 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466172 4699 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466178 4699 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466185 4699 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466191 4699 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466198 4699 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466204 4699 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466210 4699 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466215 4699 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466221 4699 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466227 4699 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466232 4699 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466237 4699 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466243 4699 feature_gate.go:330] unrecognized feature gate: Example
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466248 4699 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466252 4699 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466257 4699 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466261 4699 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466268 4699 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466274 4699 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466279 4699 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466284 4699 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466289 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466294 4699 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466300 4699 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466305 4699 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466310 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466315 4699 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466319 4699 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466324 4699 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466328 4699 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466333 4699 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466337 4699 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466342 4699 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466347 4699 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466353 4699 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466358 4699 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466363 4699 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466367 4699 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466373 4699 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466378 4699 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466383 4699 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466388 4699 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466394 4699 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466399 4699 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466405 4699 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466410 4699 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466416 4699 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466421 4699 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466426 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466431 4699 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466436 4699 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466441 4699 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466446 4699 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466451 4699 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466455 4699 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466460 4699 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466465 4699 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.466469 4699 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.466484 4699 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.500222 4699 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.500261 4699 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500348 4699 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500360 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500366 4699 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500372 4699 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500376 4699 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500381 4699 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500385 4699 feature_gate.go:330] unrecognized feature gate: Example
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500390 4699 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500396 4699 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500400 4699 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500404 4699 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500409 4699 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500413 4699 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500417 4699 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500421 4699 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500425 4699 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500429 4699 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500432 4699 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500436 4699 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500440 4699 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500443 4699 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500449 4699 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500455 4699 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500459 4699 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500464 4699 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500468 4699 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500474 4699 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500479 4699 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500484 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500488 4699 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500491 4699 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500495 4699 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500499 4699 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500504 4699 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500509 4699 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500514 4699 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500518 4699 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500522 4699 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500525 4699 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500529 4699 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500533 4699 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500537 4699 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500540 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500544 4699 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500548 4699 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500552 4699 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500555 4699 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500559 4699 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500562 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500566 4699 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500570 4699 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500574 4699 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500578 4699 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500583 4699 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500587 4699 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500592 4699 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500596 4699 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500601 4699 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500606 4699 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500610 4699 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500615 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500619 4699 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500622 4699 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500626 4699 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500629 4699 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500634 4699 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500638 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500644 4699 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500648 4699 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500652 4699 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500656 4699 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.500664 4699 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500785 4699 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500791 4699 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500796 4699 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500801 4699 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500804 4699 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500808 4699 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500812 4699 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500816 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500821 4699 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500826 4699 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500830 4699 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500834 4699 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500838 4699 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500842 4699 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500846 4699 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500849 4699 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500853 4699 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500857 4699 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500860 4699 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500865 4699 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500870 4699 feature_gate.go:330] unrecognized feature gate: Example
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500874 4699 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500879 4699 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500883 4699 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500887 4699 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500891 4699 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500895 4699 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500899 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500903 4699 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500907 4699 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500911 4699 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500914 4699 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500918 4699 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500921 4699 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500925 4699 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500929 4699 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500932 4699 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500937 4699 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500940 4699 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500944 4699 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500948 4699 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500951 4699 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500954 4699 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500958 4699 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500961 4699 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500966 4699 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500970 4699 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500974 4699 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500977 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500981 4699 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500985 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500989 4699 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500992 4699 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500996 4699 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.500999 4699 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501003 4699 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501006 4699 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501010 4699 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501014 4699 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501018 4699 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501022 4699 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501027 4699 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501031 4699 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501036 4699 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501039 4699 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501043 4699 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501047 4699 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501051 4699 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501054 4699 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501058 4699 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 26 11:10:55 crc kubenswrapper[4699]: W0226 11:10:55.501061 4699 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.501068 4699 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.501274 4699 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 26 11:10:55 crc kubenswrapper[4699]: E0226 11:10:55.511010 4699 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.514870 4699 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.515020 4699 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.517379 4699 server.go:997] "Starting client certificate rotation"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.517416 4699 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.517605 4699 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.765477 4699 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 26 11:10:55 crc kubenswrapper[4699]: E0226 11:10:55.767829 4699 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.769923 4699 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.785682 4699 log.go:25] "Validated CRI v1 runtime API"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.913746 4699 log.go:25] "Validated CRI v1 image API"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.921041 4699 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.927086 4699 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-26-11-05-31-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.927152 4699 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.970737 4699 manager.go:217] Machine: {Timestamp:2026-02-26 11:10:55.939023687 +0000 UTC m=+1.749850141 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e4404db5-04f3-42e6-90eb-21c35124a700 BootID:7f74e239-3726-44b7-b791-47b33a2699be Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:36:b7:2b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:36:b7:2b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1d:52:37 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:23:7b:dd Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6e:d8:8e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ff:5b:f5 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f6:64:4e:2c:6b:61 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fa:f3:59:ee:a9:73 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.971174 4699 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.971457 4699 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.993607 4699 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.994011 4699 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.994060 4699 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.994319 4699 topology_manager.go:138] "Creating topology manager with none policy"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.994331 4699 container_manager_linux.go:303] "Creating device plugin manager"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.999071 4699 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.999133 4699 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.999482 4699 state_mem.go:36] "Initialized new in-memory state store"
Feb 26 11:10:55 crc kubenswrapper[4699]: I0226 11:10:55.999618 4699 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.037172 4699 kubelet.go:418] "Attempting to sync node with API server"
Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.037226 4699 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.037253 4699 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.037271 4699 kubelet.go:324] "Adding apiserver pod source"
Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.037288 4699 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 26 11:10:56 crc kubenswrapper[4699]: W0226 11:10:56.041790 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.041870 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError"
Feb 26 11:10:56 crc kubenswrapper[4699]: W0226 11:10:56.042688 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.042980 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError"
Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.042876 4699 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.044838 4699 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.047272 4699 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049220 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049328 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049404 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049459 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049513 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049562 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049625 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049677 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049733 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049784 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049837 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.049885 4699 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.055206 4699 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.056170 4699 server.go:1280] "Started kubelet" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.056697 4699 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.056932 4699 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.057589 4699 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.058293 4699 server.go:460] "Adding debug handlers to kubelet server" Feb 26 11:10:56 crc systemd[1]: Started Kubernetes Kubelet. Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.117314 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.118260 4699 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.118336 4699 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.118597 4699 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.118638 4699 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.118609 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.118720 4699 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 26 11:10:56 crc 
kubenswrapper[4699]: W0226 11:10:56.119305 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.119414 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="200ms" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.119438 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.120036 4699 factory.go:55] Registering systemd factory Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.120068 4699 factory.go:221] Registration of the systemd container factory successfully Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.120440 4699 factory.go:153] Registering CRI-O factory Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.120493 4699 factory.go:221] Registration of the crio container factory successfully Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.120627 4699 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.120657 4699 factory.go:103] Registering Raw factory Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 
11:10:56.120675 4699 manager.go:1196] Started watching for new ooms in manager Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.119718 4699 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.213:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897c76e8f2e2097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.056074391 +0000 UTC m=+1.866900865,LastTimestamp:2026-02-26 11:10:56.056074391 +0000 UTC m=+1.866900865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.121321 4699 manager.go:319] Starting recovery of all containers Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126445 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126525 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126542 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126558 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126568 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126578 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126588 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126602 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126616 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" 
seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126631 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126643 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126657 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126668 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126681 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126690 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126705 
4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126722 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126731 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126745 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126759 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126773 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126785 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126799 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126813 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126825 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126841 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126860 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126874 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126888 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126900 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126913 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126925 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.126944 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127007 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127022 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127041 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127066 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127082 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127095 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127129 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" 
seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127145 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127161 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127177 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127191 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127204 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127220 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 26 11:10:56 crc 
kubenswrapper[4699]: I0226 11:10:56.127234 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127251 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127268 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127282 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127295 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127313 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127342 4699 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127359 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127373 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127386 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127409 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127423 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127433 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127444 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127455 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127466 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127479 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127492 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127507 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127519 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127531 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127542 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127552 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127575 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127586 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127599 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127610 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127641 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127661 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127675 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127685 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127698 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127709 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127721 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127766 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127785 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127800 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127813 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127826 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127838 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127852 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127864 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127874 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127886 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127899 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127911 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127922 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127932 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127944 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127954 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127967 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127983 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.127996 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128015 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128027 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" 
seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128038 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128050 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128061 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128080 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128092 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128110 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 
11:10:56.128144 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128157 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128170 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128184 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128197 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128212 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128225 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128237 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128254 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128267 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128286 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128303 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128394 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128478 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128499 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128558 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128571 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128636 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128650 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128662 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128680 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128708 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128742 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128759 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128771 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128783 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128795 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128807 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128820 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128849 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128858 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128868 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128897 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128907 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128918 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.128929 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129004 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" 
seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129036 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129139 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129151 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129242 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129306 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129320 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129373 4699 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129383 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129492 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129506 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129520 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129531 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129541 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129551 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129566 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129582 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129616 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129639 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129653 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129669 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129679 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129689 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129698 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129707 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129735 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" 
seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.129744 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.130891 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.130925 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.130934 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.130944 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.130954 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 26 11:10:56 crc 
kubenswrapper[4699]: I0226 11:10:56.130999 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.131024 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.131045 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.131085 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.131635 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.131658 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.131669 4699 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.131767 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.131782 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135015 4699 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135102 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135225 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 
26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135257 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135278 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135299 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135318 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135339 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135381 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135438 4699 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135457 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135476 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135491 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135511 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135530 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135569 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135602 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135626 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135664 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135679 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135861 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135887 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135903 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.135953 4699 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.136047 4699 reconstruct.go:97] "Volume reconstruction finished" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.136060 4699 reconciler.go:26] "Reconciler: start to sync state" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.146972 4699 manager.go:324] Recovery completed Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.159850 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.161554 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.161614 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.161625 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.163456 4699 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.163475 4699 cpu_manager.go:226] "Reconciling" 
reconcilePeriod="10s" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.163504 4699 state_mem.go:36] "Initialized new in-memory state store" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.218820 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.252282 4699 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.254730 4699 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.259040 4699 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.259380 4699 kubelet.go:2335] "Starting kubelet main sync loop" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.259519 4699 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 26 11:10:56 crc kubenswrapper[4699]: W0226 11:10:56.260560 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.260609 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.319144 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 
11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.320287 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="400ms" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.359966 4699 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.409982 4699 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.213:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897c76e8f2e2097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.056074391 +0000 UTC m=+1.866900865,LastTimestamp:2026-02-26 11:10:56.056074391 +0000 UTC m=+1.866900865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.420095 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.443422 4699 policy_none.go:49] "None policy: Start" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.445678 4699 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.445762 4699 state_mem.go:35] "Initializing new in-memory state store" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.520920 4699 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.525328 4699 manager.go:334] "Starting Device Plugin manager" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.525412 4699 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.525428 4699 server.go:79] "Starting device plugin registration server" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.525881 4699 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.525909 4699 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.526257 4699 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.526471 4699 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.526584 4699 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.534766 4699 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.560718 4699 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.560966 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 
crc kubenswrapper[4699]: I0226 11:10:56.563104 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.563187 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.563202 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.563433 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.564378 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.564476 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.564867 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.564944 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.564988 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.565304 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.565753 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.565812 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.566222 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.566288 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.566318 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.567520 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.567552 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.567567 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.567623 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.567664 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.567696 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.567720 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc 
kubenswrapper[4699]: I0226 11:10:56.567910 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.567942 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.568799 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.568845 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.568864 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.569021 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.569097 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.569133 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.569257 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.569316 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.569330 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.570064 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.570097 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.570127 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.571453 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.571484 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.571497 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.571711 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.571746 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.574973 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.575037 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.575053 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.626187 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.627230 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.627270 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.627287 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.627320 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.627954 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642374 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642415 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642457 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642481 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642506 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642566 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642679 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642749 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642801 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642839 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642883 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.642947 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.643031 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.643088 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.643140 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.721618 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="800ms" Feb 26 11:10:56 crc 
kubenswrapper[4699]: I0226 11:10:56.744918 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745006 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745032 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745054 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745076 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745099 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745153 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745179 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745190 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745208 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745234 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745226 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745278 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745370 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745546 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.746017 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745541 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745265 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745314 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.746222 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745517 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.746200 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 
11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.746276 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.746300 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.746323 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.746353 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745862 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.745389 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.746417 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.746531 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.828897 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.830563 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.830598 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.830606 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.830630 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:10:56 crc kubenswrapper[4699]: E0226 11:10:56.831092 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection 
refused" node="crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.905173 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.924321 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.940450 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.958525 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:10:56 crc kubenswrapper[4699]: I0226 11:10:56.964360 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 11:10:57 crc kubenswrapper[4699]: W0226 11:10:57.037766 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:10:57 crc kubenswrapper[4699]: E0226 11:10:57.037861 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:10:57 crc kubenswrapper[4699]: W0226 11:10:57.061081 4699 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d35da75391c4c8e7c96b84a226f1de3160a84a3aa73572d927d673ba2153c5cd WatchSource:0}: Error finding container d35da75391c4c8e7c96b84a226f1de3160a84a3aa73572d927d673ba2153c5cd: Status 404 returned error can't find the container with id d35da75391c4c8e7c96b84a226f1de3160a84a3aa73572d927d673ba2153c5cd Feb 26 11:10:57 crc kubenswrapper[4699]: W0226 11:10:57.063349 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-8746dd7735b13fa5391cba71510e4192195b692f4db52e504c78bfc50e515d54 WatchSource:0}: Error finding container 8746dd7735b13fa5391cba71510e4192195b692f4db52e504c78bfc50e515d54: Status 404 returned error can't find the container with id 8746dd7735b13fa5391cba71510e4192195b692f4db52e504c78bfc50e515d54 Feb 26 11:10:57 crc kubenswrapper[4699]: W0226 11:10:57.065521 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f4b46e19a235baeabadab9914ce0093d472ebf5699e8e17e1f6a3ca1047aa08a WatchSource:0}: Error finding container f4b46e19a235baeabadab9914ce0093d472ebf5699e8e17e1f6a3ca1047aa08a: Status 404 returned error can't find the container with id f4b46e19a235baeabadab9914ce0093d472ebf5699e8e17e1f6a3ca1047aa08a Feb 26 11:10:57 crc kubenswrapper[4699]: W0226 11:10:57.066764 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ab153560b725156be9926758f43e0c12ac1085335ac564a5eae51d4751966c97 WatchSource:0}: Error finding container ab153560b725156be9926758f43e0c12ac1085335ac564a5eae51d4751966c97: Status 404 returned error can't find the container with id 
ab153560b725156be9926758f43e0c12ac1085335ac564a5eae51d4751966c97 Feb 26 11:10:57 crc kubenswrapper[4699]: W0226 11:10:57.070377 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-69ed669d0b1cfcbcf0f180f5869f0ffbeb79283372bb6f72c4a3483f1dcea9f1 WatchSource:0}: Error finding container 69ed669d0b1cfcbcf0f180f5869f0ffbeb79283372bb6f72c4a3483f1dcea9f1: Status 404 returned error can't find the container with id 69ed669d0b1cfcbcf0f180f5869f0ffbeb79283372bb6f72c4a3483f1dcea9f1 Feb 26 11:10:57 crc kubenswrapper[4699]: I0226 11:10:57.118896 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:10:57 crc kubenswrapper[4699]: W0226 11:10:57.146259 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:10:57 crc kubenswrapper[4699]: E0226 11:10:57.146393 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:10:57 crc kubenswrapper[4699]: I0226 11:10:57.232045 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:57 crc kubenswrapper[4699]: I0226 11:10:57.233979 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:57 crc 
kubenswrapper[4699]: I0226 11:10:57.234048 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:57 crc kubenswrapper[4699]: I0226 11:10:57.234061 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:57 crc kubenswrapper[4699]: I0226 11:10:57.234095 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:10:57 crc kubenswrapper[4699]: E0226 11:10:57.234807 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc" Feb 26 11:10:57 crc kubenswrapper[4699]: I0226 11:10:57.264695 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d35da75391c4c8e7c96b84a226f1de3160a84a3aa73572d927d673ba2153c5cd"} Feb 26 11:10:57 crc kubenswrapper[4699]: I0226 11:10:57.265632 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"69ed669d0b1cfcbcf0f180f5869f0ffbeb79283372bb6f72c4a3483f1dcea9f1"} Feb 26 11:10:57 crc kubenswrapper[4699]: I0226 11:10:57.266528 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ab153560b725156be9926758f43e0c12ac1085335ac564a5eae51d4751966c97"} Feb 26 11:10:57 crc kubenswrapper[4699]: I0226 11:10:57.267554 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f4b46e19a235baeabadab9914ce0093d472ebf5699e8e17e1f6a3ca1047aa08a"} Feb 26 11:10:57 crc kubenswrapper[4699]: I0226 11:10:57.268464 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8746dd7735b13fa5391cba71510e4192195b692f4db52e504c78bfc50e515d54"} Feb 26 11:10:57 crc kubenswrapper[4699]: W0226 11:10:57.362357 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:10:57 crc kubenswrapper[4699]: E0226 11:10:57.362442 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:10:57 crc kubenswrapper[4699]: W0226 11:10:57.447970 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:10:57 crc kubenswrapper[4699]: E0226 11:10:57.448168 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:10:57 crc kubenswrapper[4699]: E0226 11:10:57.522678 4699 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="1.6s" Feb 26 11:10:57 crc kubenswrapper[4699]: I0226 11:10:57.943895 4699 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 11:10:57 crc kubenswrapper[4699]: E0226 11:10:57.945407 4699 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:10:58 crc kubenswrapper[4699]: I0226 11:10:58.035055 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:58 crc kubenswrapper[4699]: I0226 11:10:58.037032 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:58 crc kubenswrapper[4699]: I0226 11:10:58.037099 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:58 crc kubenswrapper[4699]: I0226 11:10:58.037140 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:58 crc kubenswrapper[4699]: I0226 11:10:58.037176 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:10:58 crc kubenswrapper[4699]: E0226 11:10:58.037769 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc" Feb 26 11:10:58 crc 
kubenswrapper[4699]: I0226 11:10:58.119952 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:10:58 crc kubenswrapper[4699]: W0226 11:10:58.859107 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:10:58 crc kubenswrapper[4699]: E0226 11:10:58.859798 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:10:59 crc kubenswrapper[4699]: I0226 11:10:59.119038 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:10:59 crc kubenswrapper[4699]: E0226 11:10:59.123512 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="3.2s" Feb 26 11:10:59 crc kubenswrapper[4699]: W0226 11:10:59.404936 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 
11:10:59 crc kubenswrapper[4699]: E0226 11:10:59.405038 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:10:59 crc kubenswrapper[4699]: I0226 11:10:59.638567 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:10:59 crc kubenswrapper[4699]: I0226 11:10:59.640214 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:10:59 crc kubenswrapper[4699]: I0226 11:10:59.640281 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:10:59 crc kubenswrapper[4699]: I0226 11:10:59.640300 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:10:59 crc kubenswrapper[4699]: I0226 11:10:59.640340 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:10:59 crc kubenswrapper[4699]: E0226 11:10:59.640991 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc" Feb 26 11:11:00 crc kubenswrapper[4699]: W0226 11:11:00.035026 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:11:00 crc kubenswrapper[4699]: E0226 11:11:00.035092 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:11:00 crc kubenswrapper[4699]: I0226 11:11:00.118700 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:11:00 crc kubenswrapper[4699]: W0226 11:11:00.399234 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:11:00 crc kubenswrapper[4699]: E0226 11:11:00.399307 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.117925 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.280506 4699 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836" exitCode=0 Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.280685 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836"}
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.280841 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.282837 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.282885 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.282897 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.283033 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035" exitCode=0
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.283112 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035"}
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.283169 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.284185 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.284225 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.284237 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.285227 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b4003edefbd2aac9a706e0d56e2791c34c4bc9a820e5cda0ab4cf3172fc4f5c6"}
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.285275 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bcbdf473c08abfc93be6ee643eb86aebdaf8cae59cbe4c844b800862b15f7434"}
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.286382 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.286826 4699 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b3a519dfbaf61432d8a2ac84be99b349ba10be387e76e3482dd82f11dacd1e2a" exitCode=0
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.286885 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b3a519dfbaf61432d8a2ac84be99b349ba10be387e76e3482dd82f11dacd1e2a"}
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.286922 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.287211 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.287248 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.287263 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.288067 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.288108 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.288138 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.288581 4699 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07" exitCode=0
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.288626 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07"}
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.288747 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.290423 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.290451 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:01 crc kubenswrapper[4699]: I0226 11:11:01.290488 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.119721 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.211824 4699 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 26 11:11:02 crc kubenswrapper[4699]: E0226 11:11:02.213173 4699 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.296306 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00"}
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.296359 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.296240 4699 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00" exitCode=0
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.297905 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.297940 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.297952 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.303890 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6e0c4b60e5b4a7561832379bb034e4202ac67058677a2e7e362703e8f5952b83"}
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.303961 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7a914b4d63550a877b25fc802fe067357dc144ba29c8d4cff98b2d3d1c10baa7"}
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.303976 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f2e3cbce38225d1f5dc3e506a9bc813f8b1e76e2f3315a83ce984c0b51ad0b22"}
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.304271 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.306317 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.306351 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.306362 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.310704 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6"}
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.310779 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356"}
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.310794 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e"}
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.314081 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b1cda06107373ef4a7be9d68d9a39ed9f7351913e1deb1bd9e7d825d93ee54a7"}
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.314137 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.314163 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e0bc27153e659e049d639cf7b8963c1485433aed35f5efe5e88f1cc275d92a39"}
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.315515 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.315563 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.315580 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.316231 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"aadd2bede3bd40a4bdf48952422350955b12efacb3598661223bc1d386191df4"}
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.316340 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.317329 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.317374 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.317385 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:02 crc kubenswrapper[4699]: E0226 11:11:02.324547 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="6.4s"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.835735 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.841841 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.843959 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.844048 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.844068 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:02 crc kubenswrapper[4699]: I0226 11:11:02.844110 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 26 11:11:02 crc kubenswrapper[4699]: E0226 11:11:02.844856 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.119074 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
Feb 26 11:11:03 crc kubenswrapper[4699]: W0226 11:11:03.292209 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused
Feb 26 11:11:03 crc kubenswrapper[4699]: E0226 11:11:03.292346 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.319361 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.323507 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec"}
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.323651 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c4fe347fb042f4777ab48cf760a13b19a6e283d98b2b10d80ea49490675e5357"}
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.323789 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.324994 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.325045 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.325062 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.326677 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.328092 4699 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4" exitCode=0
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.328147 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4"}
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.328257 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.328299 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.328327 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.328268 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.329840 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.329868 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.329867 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.329841 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.329902 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.329916 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.329904 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.329996 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.329879 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.330320 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.330345 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.330356 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:03 crc kubenswrapper[4699]: I0226 11:11:03.865778 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.335711 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99"}
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.335772 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536"}
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.335784 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6"}
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.335788 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.335875 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.335896 4699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.335961 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.336573 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.337043 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.337085 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.337096 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.337210 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.337270 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.337295 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.337751 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.337852 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:04 crc kubenswrapper[4699]: I0226 11:11:04.337868 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.154298 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.170942 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.344094 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8"}
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.344176 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22"}
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.344212 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.344280 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.344280 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.345700 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.345740 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.345750 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.345813 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.345868 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.345879 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.346400 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.346443 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:05 crc kubenswrapper[4699]: I0226 11:11:05.346457 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:06 crc kubenswrapper[4699]: I0226 11:11:06.346837 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:06 crc kubenswrapper[4699]: I0226 11:11:06.346961 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:06 crc kubenswrapper[4699]: I0226 11:11:06.349102 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:06 crc kubenswrapper[4699]: I0226 11:11:06.349235 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:06 crc kubenswrapper[4699]: I0226 11:11:06.349102 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:06 crc kubenswrapper[4699]: I0226 11:11:06.349289 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:06 crc kubenswrapper[4699]: I0226 11:11:06.349308 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:06 crc kubenswrapper[4699]: I0226 11:11:06.349253 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:06 crc kubenswrapper[4699]: E0226 11:11:06.534929 4699 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 26 11:11:08 crc kubenswrapper[4699]: I0226 11:11:08.274292 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 26 11:11:08 crc kubenswrapper[4699]: I0226 11:11:08.274542 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:08 crc kubenswrapper[4699]: I0226 11:11:08.276256 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:08 crc kubenswrapper[4699]: I0226 11:11:08.276327 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:08 crc kubenswrapper[4699]: I0226 11:11:08.276342 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:09 crc kubenswrapper[4699]: I0226 11:11:09.245456 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:09 crc kubenswrapper[4699]: I0226 11:11:09.247104 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:09 crc kubenswrapper[4699]: I0226 11:11:09.247185 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:09 crc kubenswrapper[4699]: I0226 11:11:09.247197 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:09 crc kubenswrapper[4699]: I0226 11:11:09.247227 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 26 11:11:09 crc kubenswrapper[4699]: I0226 11:11:09.552090 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 26 11:11:09 crc kubenswrapper[4699]: I0226 11:11:09.552395 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:09 crc kubenswrapper[4699]: I0226 11:11:09.579752 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:09 crc kubenswrapper[4699]: I0226 11:11:09.579810 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:09 crc kubenswrapper[4699]: I0226 11:11:09.579829 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:10 crc kubenswrapper[4699]: I0226 11:11:10.376581 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 11:11:10 crc kubenswrapper[4699]: I0226 11:11:10.377220 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:10 crc kubenswrapper[4699]: I0226 11:11:10.378772 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:10 crc kubenswrapper[4699]: I0226 11:11:10.378860 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:10 crc kubenswrapper[4699]: I0226 11:11:10.378880 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:10 crc kubenswrapper[4699]: I0226 11:11:10.383079 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 11:11:10 crc kubenswrapper[4699]: I0226 11:11:10.457011 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 11:11:10 crc kubenswrapper[4699]: I0226 11:11:10.699166 4699 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 26 11:11:11 crc kubenswrapper[4699]: I0226 11:11:11.368837 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:11 crc kubenswrapper[4699]: I0226 11:11:11.370362 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:11 crc kubenswrapper[4699]: I0226 11:11:11.370433 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:11 crc kubenswrapper[4699]: I0226 11:11:11.370443 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:12 crc kubenswrapper[4699]: I0226 11:11:12.371383 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 11:11:12 crc kubenswrapper[4699]: I0226 11:11:12.372536 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:11:12 crc kubenswrapper[4699]: I0226 11:11:12.372591 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:11:12 crc kubenswrapper[4699]: I0226 11:11:12.372605 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:11:13 crc kubenswrapper[4699]: I0226 11:11:13.461108 4699 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 11:11:13 crc kubenswrapper[4699]: I0226 11:11:13.461919 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 11:11:13 crc kubenswrapper[4699]: E0226 11:11:13.966816 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:13Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 26 11:11:13 crc kubenswrapper[4699]: E0226 11:11:13.968622 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:13Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 26 11:11:13 crc kubenswrapper[4699]: E0226 11:11:13.972239 4699 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:13Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897c76e8f2e2097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.056074391 +0000 UTC m=+1.866900865,LastTimestamp:2026-02-26 11:10:56.056074391 +0000 UTC m=+1.866900865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 26 11:11:13 crc kubenswrapper[4699]: E0226 11:11:13.972684 4699 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 11:11:13 crc kubenswrapper[4699]: W0226 11:11:13.972764 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:13Z is after 2026-02-23T05:33:13Z
Feb 26 11:11:13 crc kubenswrapper[4699]: E0226 11:11:13.972874 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 11:11:13 crc kubenswrapper[4699]: I0226 11:11:13.973938 4699 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 26 11:11:13 crc kubenswrapper[4699]: I0226 11:11:13.974004 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 26 11:11:13 crc kubenswrapper[4699]: W0226 11:11:13.976350 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:13Z is after 2026-02-23T05:33:13Z
Feb 26 11:11:13 crc kubenswrapper[4699]: E0226 11:11:13.976434 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 11:11:13 crc kubenswrapper[4699]: I0226 11:11:13.979178 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:13Z is after 2026-02-23T05:33:13Z
Feb 26 11:11:13 crc kubenswrapper[4699]: I0226 11:11:13.979346 4699 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 26 11:11:13 crc kubenswrapper[4699]: I0226 11:11:13.979437 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 26 11:11:13 crc kubenswrapper[4699]: W0226 11:11:13.979817 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:13Z is after 2026-02-23T05:33:13Z
Feb 26 11:11:13 crc kubenswrapper[4699]: E0226 11:11:13.979891 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time
2026-02-26T11:11:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:11:14 crc kubenswrapper[4699]: I0226 11:11:14.121831 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:14Z is after 2026-02-23T05:33:13Z Feb 26 11:11:14 crc kubenswrapper[4699]: I0226 11:11:14.379451 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 11:11:14 crc kubenswrapper[4699]: I0226 11:11:14.382444 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c4fe347fb042f4777ab48cf760a13b19a6e283d98b2b10d80ea49490675e5357" exitCode=255 Feb 26 11:11:14 crc kubenswrapper[4699]: I0226 11:11:14.382530 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c4fe347fb042f4777ab48cf760a13b19a6e283d98b2b10d80ea49490675e5357"} Feb 26 11:11:14 crc kubenswrapper[4699]: I0226 11:11:14.382743 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:14 crc kubenswrapper[4699]: I0226 11:11:14.383683 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:14 crc kubenswrapper[4699]: I0226 11:11:14.383735 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:14 crc kubenswrapper[4699]: I0226 11:11:14.383748 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:14 crc 
kubenswrapper[4699]: I0226 11:11:14.384282 4699 scope.go:117] "RemoveContainer" containerID="c4fe347fb042f4777ab48cf760a13b19a6e283d98b2b10d80ea49490675e5357" Feb 26 11:11:15 crc kubenswrapper[4699]: I0226 11:11:15.349540 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:15Z is after 2026-02-23T05:33:13Z Feb 26 11:11:15 crc kubenswrapper[4699]: I0226 11:11:15.388635 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 11:11:15 crc kubenswrapper[4699]: I0226 11:11:15.390966 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8434a1d9c6c5f5c25ebe428c5621131d21b6a77cab0e7b5990805437c2e11e90"} Feb 26 11:11:15 crc kubenswrapper[4699]: I0226 11:11:15.391191 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:15 crc kubenswrapper[4699]: I0226 11:11:15.393421 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:15 crc kubenswrapper[4699]: I0226 11:11:15.393484 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:15 crc kubenswrapper[4699]: I0226 11:11:15.393497 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:15 crc kubenswrapper[4699]: I0226 11:11:15.402330 4699 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]log ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]etcd ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/generic-apiserver-start-informers ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/priority-and-fairness-filter ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/start-apiextensions-informers ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/start-apiextensions-controllers ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/crd-informer-synced ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/start-system-namespaces-controller ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 26 11:11:15 crc kubenswrapper[4699]: 
[+]poststarthook/start-service-ip-repair-controllers ok Feb 26 11:11:15 crc kubenswrapper[4699]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/bootstrap-controller ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/start-kube-aggregator-informers ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/apiservice-registration-controller ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/apiservice-discovery-controller ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]autoregister-completion ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/apiservice-openapi-controller ok Feb 26 11:11:15 crc kubenswrapper[4699]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 26 11:11:15 crc kubenswrapper[4699]: livez check failed Feb 26 11:11:15 crc kubenswrapper[4699]: I0226 11:11:15.402426 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:11:15 crc kubenswrapper[4699]: W0226 11:11:15.988026 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:15Z is after 2026-02-23T05:33:13Z Feb 26 11:11:15 crc kubenswrapper[4699]: E0226 11:11:15.988144 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:15Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:11:16 crc kubenswrapper[4699]: I0226 11:11:16.121611 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:16Z is after 2026-02-23T05:33:13Z Feb 26 11:11:16 crc kubenswrapper[4699]: I0226 11:11:16.397536 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 11:11:16 crc kubenswrapper[4699]: I0226 11:11:16.398515 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 11:11:16 crc kubenswrapper[4699]: I0226 11:11:16.401877 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8434a1d9c6c5f5c25ebe428c5621131d21b6a77cab0e7b5990805437c2e11e90" exitCode=255 Feb 26 11:11:16 crc kubenswrapper[4699]: I0226 11:11:16.401951 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8434a1d9c6c5f5c25ebe428c5621131d21b6a77cab0e7b5990805437c2e11e90"} Feb 26 11:11:16 crc kubenswrapper[4699]: I0226 11:11:16.402014 4699 scope.go:117] "RemoveContainer" containerID="c4fe347fb042f4777ab48cf760a13b19a6e283d98b2b10d80ea49490675e5357" Feb 26 11:11:16 crc kubenswrapper[4699]: I0226 11:11:16.402235 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:16 crc kubenswrapper[4699]: I0226 11:11:16.403873 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:16 crc kubenswrapper[4699]: I0226 11:11:16.403930 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:16 crc kubenswrapper[4699]: I0226 11:11:16.403944 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:16 crc kubenswrapper[4699]: I0226 11:11:16.404762 4699 scope.go:117] "RemoveContainer" containerID="8434a1d9c6c5f5c25ebe428c5621131d21b6a77cab0e7b5990805437c2e11e90" Feb 26 11:11:16 crc kubenswrapper[4699]: E0226 11:11:16.405034 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:11:16 crc kubenswrapper[4699]: E0226 11:11:16.535244 4699 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:11:17 crc kubenswrapper[4699]: I0226 
11:11:17.121765 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:17Z is after 2026-02-23T05:33:13Z Feb 26 11:11:17 crc kubenswrapper[4699]: I0226 11:11:17.407561 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 11:11:17 crc kubenswrapper[4699]: I0226 11:11:17.710464 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:17 crc kubenswrapper[4699]: I0226 11:11:17.713076 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:17 crc kubenswrapper[4699]: I0226 11:11:17.715624 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:17 crc kubenswrapper[4699]: I0226 11:11:17.715680 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:17 crc kubenswrapper[4699]: I0226 11:11:17.715698 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:17 crc kubenswrapper[4699]: I0226 11:11:17.716502 4699 scope.go:117] "RemoveContainer" containerID="8434a1d9c6c5f5c25ebe428c5621131d21b6a77cab0e7b5990805437c2e11e90" Feb 26 11:11:17 crc kubenswrapper[4699]: E0226 11:11:17.716725 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.120916 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:18Z is after 2026-02-23T05:33:13Z Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.312156 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.312383 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.313963 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.314025 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.314039 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.325568 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.412769 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.413918 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.413977 4699 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.413990 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.972255 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.972459 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.973526 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.973558 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.973572 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:18 crc kubenswrapper[4699]: I0226 11:11:18.974210 4699 scope.go:117] "RemoveContainer" containerID="8434a1d9c6c5f5c25ebe428c5621131d21b6a77cab0e7b5990805437c2e11e90" Feb 26 11:11:18 crc kubenswrapper[4699]: E0226 11:11:18.974390 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:11:19 crc kubenswrapper[4699]: I0226 11:11:19.121529 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:11:19Z is after 2026-02-23T05:33:13Z Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.122645 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.159942 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.160111 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.161294 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.161387 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.161403 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.162018 4699 scope.go:117] "RemoveContainer" containerID="8434a1d9c6c5f5c25ebe428c5621131d21b6a77cab0e7b5990805437c2e11e90" Feb 26 11:11:20 crc kubenswrapper[4699]: E0226 11:11:20.162257 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.164054 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.423355 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.424952 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.425235 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.425268 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.426375 4699 scope.go:117] "RemoveContainer" containerID="8434a1d9c6c5f5c25ebe428c5621131d21b6a77cab0e7b5990805437c2e11e90" Feb 26 11:11:20 crc kubenswrapper[4699]: E0226 11:11:20.426606 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.968811 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.969893 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 
11:11:20.969942 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.969955 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:20 crc kubenswrapper[4699]: I0226 11:11:20.969981 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:11:20 crc kubenswrapper[4699]: E0226 11:11:20.971174 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 11:11:20 crc kubenswrapper[4699]: E0226 11:11:20.971181 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 11:11:21 crc kubenswrapper[4699]: I0226 11:11:21.125261 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:22 crc kubenswrapper[4699]: I0226 11:11:22.122460 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:22 crc kubenswrapper[4699]: W0226 11:11:22.587726 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 26 11:11:22 crc kubenswrapper[4699]: E0226 11:11:22.587780 4699 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 26 11:11:23 crc kubenswrapper[4699]: I0226 11:11:23.124010 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:23 crc kubenswrapper[4699]: I0226 11:11:23.457164 4699 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:11:23 crc kubenswrapper[4699]: I0226 11:11:23.457298 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:11:23 crc kubenswrapper[4699]: W0226 11:11:23.683784 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:23 crc kubenswrapper[4699]: E0226 11:11:23.684374 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: 
User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 11:11:23 crc kubenswrapper[4699]: E0226 11:11:23.978492 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e8f2e2097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.056074391 +0000 UTC m=+1.866900865,LastTimestamp:2026-02-26 11:10:56.056074391 +0000 UTC m=+1.866900865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:23 crc kubenswrapper[4699]: E0226 11:11:23.984323 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578464a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161597002 +0000 UTC m=+1.972423436,LastTimestamp:2026-02-26 11:10:56.161597002 +0000 UTC m=+1.972423436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:23 crc kubenswrapper[4699]: E0226 11:11:23.989924 4699 event.go:359] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578a4b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161621172 +0000 UTC m=+1.972447606,LastTimestamp:2026-02-26 11:10:56.161621172 +0000 UTC m=+1.972447606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:23 crc kubenswrapper[4699]: E0226 11:11:23.996349 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578c9a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161630633 +0000 UTC m=+1.972457067,LastTimestamp:2026-02-26 11:10:56.161630633 +0000 UTC m=+1.972457067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.002189 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76eab5cff3d default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.528908093 +0000 UTC m=+2.339734547,LastTimestamp:2026-02-26 11:10:56.528908093 +0000 UTC m=+2.339734547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.007237 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578464a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578464a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161597002 +0000 UTC m=+1.972423436,LastTimestamp:2026-02-26 11:10:56.563165407 +0000 UTC m=+2.373991861,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.011936 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578a4b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578a4b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161621172 +0000 UTC m=+1.972447606,LastTimestamp:2026-02-26 11:10:56.563196988 +0000 UTC m=+2.374023422,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.016705 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578c9a9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578c9a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161630633 +0000 UTC m=+1.972457067,LastTimestamp:2026-02-26 11:10:56.563211318 +0000 UTC m=+2.374037762,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.021241 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578464a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578464a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161597002 +0000 UTC m=+1.972423436,LastTimestamp:2026-02-26 11:10:56.564917108 +0000 UTC m=+2.375743612,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.026034 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578a4b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578a4b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161621172 +0000 UTC m=+1.972447606,LastTimestamp:2026-02-26 11:10:56.564978779 +0000 UTC m=+2.375805243,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.031057 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578c9a9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578c9a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161630633 +0000 UTC 
m=+1.972457067,LastTimestamp:2026-02-26 11:10:56.56500369 +0000 UTC m=+2.375830164,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.035738 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578464a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578464a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161597002 +0000 UTC m=+1.972423436,LastTimestamp:2026-02-26 11:10:56.566262887 +0000 UTC m=+2.377089351,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.039957 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578a4b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578a4b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161621172 +0000 UTC m=+1.972447606,LastTimestamp:2026-02-26 11:10:56.566305848 +0000 UTC m=+2.377132322,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.044962 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578c9a9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578c9a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161630633 +0000 UTC m=+1.972457067,LastTimestamp:2026-02-26 11:10:56.566332149 +0000 UTC m=+2.377158623,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.049645 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578464a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578464a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161597002 +0000 UTC m=+1.972423436,LastTimestamp:2026-02-26 11:10:56.567544114 +0000 UTC m=+2.378370558,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.054347 4699 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578a4b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578a4b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161621172 +0000 UTC m=+1.972447606,LastTimestamp:2026-02-26 11:10:56.567562034 +0000 UTC m=+2.378388478,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.060163 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578c9a9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578c9a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161630633 +0000 UTC m=+1.972457067,LastTimestamp:2026-02-26 11:10:56.567576185 +0000 UTC m=+2.378402629,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.065506 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578464a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578464a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161597002 +0000 UTC m=+1.972423436,LastTimestamp:2026-02-26 11:10:56.567650277 +0000 UTC m=+2.378476751,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.067243 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578a4b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578a4b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161621172 +0000 UTC m=+1.972447606,LastTimestamp:2026-02-26 11:10:56.567684278 +0000 UTC m=+2.378510752,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.071737 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578c9a9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578c9a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161630633 +0000 UTC m=+1.972457067,LastTimestamp:2026-02-26 11:10:56.567708849 +0000 UTC m=+2.378535323,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.076393 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578464a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578464a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161597002 +0000 UTC m=+1.972423436,LastTimestamp:2026-02-26 11:10:56.568827501 +0000 UTC m=+2.379653965,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.080416 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578a4b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578a4b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161621172 +0000 UTC m=+1.972447606,LastTimestamp:2026-02-26 11:10:56.568858072 +0000 UTC m=+2.379684546,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.084842 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578c9a9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578c9a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161630633 +0000 UTC m=+1.972457067,LastTimestamp:2026-02-26 11:10:56.568873572 +0000 UTC m=+2.379700046,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.089169 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578464a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578464a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161597002 +0000 UTC 
m=+1.972423436,LastTimestamp:2026-02-26 11:10:56.569291744 +0000 UTC m=+2.380118178,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.094385 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c76e9578a4b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c76e9578a4b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:56.161621172 +0000 UTC m=+1.972447606,LastTimestamp:2026-02-26 11:10:56.569327155 +0000 UTC m=+2.380153589,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.100189 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76ecb68ca75 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:57.066551925 +0000 UTC m=+2.877378359,LastTimestamp:2026-02-26 11:10:57.066551925 +0000 UTC m=+2.877378359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.104949 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76ecb70313d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:57.067036989 +0000 UTC m=+2.877863423,LastTimestamp:2026-02-26 11:10:57.067036989 +0000 UTC m=+2.877863423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.112728 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c76ecb862b88 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:57.06847732 +0000 UTC m=+2.879303754,LastTimestamp:2026-02-26 11:10:57.06847732 +0000 UTC m=+2.879303754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.117305 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c76ecb8ae145 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:57.068785989 +0000 UTC m=+2.879612423,LastTimestamp:2026-02-26 11:10:57.068785989 +0000 UTC m=+2.879612423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: I0226 11:11:24.121248 4699 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.121236 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76ecbf09a6b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:10:57.075452523 +0000 UTC m=+2.886278947,LastTimestamp:2026-02-26 11:10:57.075452523 +0000 UTC m=+2.886278947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.126261 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fa8017535 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.767544629 +0000 UTC m=+6.578371073,LastTimestamp:2026-02-26 11:11:00.767544629 +0000 UTC m=+6.578371073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.131321 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76fa83340c4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.770808004 +0000 UTC m=+6.581634438,LastTimestamp:2026-02-26 11:11:00.770808004 +0000 UTC m=+6.581634438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.136298 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76fa8667987 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.774164871 +0000 UTC m=+6.584991305,LastTimestamp:2026-02-26 11:11:00.774164871 +0000 UTC m=+6.584991305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.140475 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c76fa871e765 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.774913893 +0000 UTC m=+6.585740327,LastTimestamp:2026-02-26 11:11:00.774913893 +0000 UTC m=+6.585740327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.144856 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c76fa8721415 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.774925333 +0000 UTC m=+6.585751767,LastTimestamp:2026-02-26 11:11:00.774925333 +0000 UTC m=+6.585751767,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.148816 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fa8a9560a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.778546698 +0000 UTC m=+6.589373132,LastTimestamp:2026-02-26 11:11:00.778546698 +0000 UTC m=+6.589373132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.153786 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fa8ce3600 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.780963328 +0000 UTC m=+6.591789762,LastTimestamp:2026-02-26 11:11:00.780963328 +0000 UTC m=+6.591789762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.158567 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76fa90c7ddd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.785044957 +0000 UTC m=+6.595871391,LastTimestamp:2026-02-26 11:11:00.785044957 +0000 UTC m=+6.595871391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.163185 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c76fa959761b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.790089243 +0000 UTC m=+6.600915677,LastTimestamp:2026-02-26 11:11:00.790089243 +0000 UTC m=+6.600915677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.168108 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c76fa9a7edea openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.795231722 +0000 UTC m=+6.606058156,LastTimestamp:2026-02-26 11:11:00.795231722 +0000 UTC m=+6.606058156,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.172504 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76fa9a9c902 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.795353346 +0000 UTC m=+6.606179780,LastTimestamp:2026-02-26 11:11:00.795353346 +0000 UTC m=+6.606179780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.177607 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fbb1a2118 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.0879286 +0000 UTC m=+6.898755034,LastTimestamp:2026-02-26 11:11:01.0879286 
+0000 UTC m=+6.898755034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.181906 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fbc9e2cd0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.113359568 +0000 UTC m=+6.924186002,LastTimestamp:2026-02-26 11:11:01.113359568 +0000 UTC m=+6.924186002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.186415 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fbcb413c5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.114794949 +0000 UTC m=+6.925621383,LastTimestamp:2026-02-26 11:11:01.114794949 +0000 UTC m=+6.925621383,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.191058 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76fc6dc9419 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.285221401 +0000 UTC m=+7.096047835,LastTimestamp:2026-02-26 11:11:01.285221401 +0000 UTC m=+7.096047835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.194387 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76fc6e7b571 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.285950833 +0000 UTC m=+7.096777267,LastTimestamp:2026-02-26 11:11:01.285950833 +0000 UTC m=+7.096777267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.198094 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fc7060b18 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.28793884 +0000 UTC m=+7.098765274,LastTimestamp:2026-02-26 11:11:01.28793884 +0000 UTC m=+7.098765274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc 
kubenswrapper[4699]: E0226 11:11:24.202878 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c76fc71a5a6d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.289269869 +0000 UTC m=+7.100096303,LastTimestamp:2026-02-26 11:11:01.289269869 +0000 UTC m=+7.100096303,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.206998 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c76fc73dde5e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.291597406 +0000 UTC m=+7.102423860,LastTimestamp:2026-02-26 11:11:01.291597406 +0000 UTC m=+7.102423860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.211568 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fc83b53ec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.308208108 +0000 UTC m=+7.119034542,LastTimestamp:2026-02-26 11:11:01.308208108 +0000 UTC m=+7.119034542,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.215836 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fc871584b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.311748171 +0000 UTC m=+7.122574605,LastTimestamp:2026-02-26 11:11:01.311748171 +0000 UTC m=+7.122574605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.219945 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76fd4ba234f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.517845327 +0000 UTC m=+7.328671751,LastTimestamp:2026-02-26 11:11:01.517845327 +0000 UTC m=+7.328671751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.223737 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fd4f89c78 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.521939576 +0000 UTC m=+7.332766010,LastTimestamp:2026-02-26 11:11:01.521939576 +0000 UTC m=+7.332766010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.227557 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76fd50bb6ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.523191502 +0000 UTC m=+7.334017936,LastTimestamp:2026-02-26 11:11:01.523191502 +0000 UTC m=+7.334017936,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc 
kubenswrapper[4699]: E0226 11:11:24.231401 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c76fd55b3a02 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.528402434 +0000 UTC m=+7.339228868,LastTimestamp:2026-02-26 11:11:01.528402434 +0000 UTC m=+7.339228868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.234956 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76fd5869186 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.531242886 +0000 UTC m=+7.342069320,LastTimestamp:2026-02-26 11:11:01.531242886 +0000 UTC m=+7.342069320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.239574 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76fd5974654 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.532337748 +0000 UTC m=+7.343164182,LastTimestamp:2026-02-26 11:11:01.532337748 +0000 UTC m=+7.343164182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.243735 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c76fd5b3ff6e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 
11:11:01.534220142 +0000 UTC m=+7.345046576,LastTimestamp:2026-02-26 11:11:01.534220142 +0000 UTC m=+7.345046576,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.247698 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fd5f53604 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.538493956 +0000 UTC m=+7.349320390,LastTimestamp:2026-02-26 11:11:01.538493956 +0000 UTC m=+7.349320390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.251595 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76fd79cfe8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.566267022 +0000 UTC m=+7.377093456,LastTimestamp:2026-02-26 11:11:01.566267022 +0000 UTC m=+7.377093456,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.255456 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76fd7b20cfc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.567646972 +0000 UTC m=+7.378473406,LastTimestamp:2026-02-26 11:11:01.567646972 +0000 UTC m=+7.378473406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.259581 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c76fd8eb947b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.588194427 +0000 UTC m=+7.399020861,LastTimestamp:2026-02-26 11:11:01.588194427 +0000 UTC m=+7.399020861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.264018 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76fe16e158e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.730964878 +0000 UTC m=+7.541791312,LastTimestamp:2026-02-26 11:11:01.730964878 +0000 UTC m=+7.541791312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.268565 4699 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76fe4ee9029 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.789716521 +0000 UTC m=+7.600542975,LastTimestamp:2026-02-26 11:11:01.789716521 +0000 UTC m=+7.600542975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.272865 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76fe50a12d4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.791519444 +0000 UTC m=+7.602345878,LastTimestamp:2026-02-26 11:11:01.791519444 +0000 UTC 
m=+7.602345878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.277965 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76fe55986ad openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.796726445 +0000 UTC m=+7.607552879,LastTimestamp:2026-02-26 11:11:01.796726445 +0000 UTC m=+7.607552879,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.283309 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76fe749e7da openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.829257178 
+0000 UTC m=+7.640083612,LastTimestamp:2026-02-26 11:11:01.829257178 +0000 UTC m=+7.640083612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.288709 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76fe76d3a61 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.831572065 +0000 UTC m=+7.642398499,LastTimestamp:2026-02-26 11:11:01.831572065 +0000 UTC m=+7.642398499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.293460 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76ff3db399f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.040107423 +0000 UTC m=+7.850933857,LastTimestamp:2026-02-26 11:11:02.040107423 +0000 UTC m=+7.850933857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.297902 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76ff583ccf9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.067932409 +0000 UTC m=+7.878758843,LastTimestamp:2026-02-26 11:11:02.067932409 +0000 UTC m=+7.878758843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.302450 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897c76ff6076bbe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.07655827 +0000 UTC m=+7.887384704,LastTimestamp:2026-02-26 11:11:02.07655827 +0000 UTC m=+7.887384704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.306422 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c76ff60a39ed openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.076742125 +0000 UTC m=+7.887568559,LastTimestamp:2026-02-26 11:11:02.076742125 +0000 UTC m=+7.887568559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.310821 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in 
API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76ff71b4db2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.094638514 +0000 UTC m=+7.905464948,LastTimestamp:2026-02-26 11:11:02.094638514 +0000 UTC m=+7.905464948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.314831 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c76ff7344e8b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.096277131 +0000 UTC m=+7.907103565,LastTimestamp:2026-02-26 11:11:02.096277131 +0000 UTC m=+7.907103565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.318243 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c77002744e49 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.285020745 +0000 UTC m=+8.095847199,LastTimestamp:2026-02-26 11:11:02.285020745 +0000 UTC m=+8.095847199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.322575 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c7700387ab58 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.303066968 +0000 UTC m=+8.113893402,LastTimestamp:2026-02-26 
11:11:02.303066968 +0000 UTC m=+8.113893402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.328960 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c770038e7869 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.303512681 +0000 UTC m=+8.114339125,LastTimestamp:2026-02-26 11:11:02.303512681 +0000 UTC m=+8.114339125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.333739 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c770039fad76 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.304640374 +0000 UTC m=+8.115466808,LastTimestamp:2026-02-26 11:11:02.304640374 +0000 UTC m=+8.115466808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.340752 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c77015d455bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.610081211 +0000 UTC m=+8.420907635,LastTimestamp:2026-02-26 11:11:02.610081211 +0000 UTC m=+8.420907635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.347039 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7701610c28a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.614041226 +0000 UTC m=+8.424867660,LastTimestamp:2026-02-26 11:11:02.614041226 +0000 UTC m=+8.424867660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.351382 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c7701702966e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.629889646 +0000 UTC m=+8.440716080,LastTimestamp:2026-02-26 11:11:02.629889646 +0000 UTC m=+8.440716080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.354926 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c770176dd91e openshift-etcd 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.63691907 +0000 UTC m=+8.447745504,LastTimestamp:2026-02-26 11:11:02.63691907 +0000 UTC m=+8.447745504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.360164 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c77040d4b982 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:03.331527042 +0000 UTC m=+9.142353486,LastTimestamp:2026-02-26 11:11:03.331527042 +0000 UTC m=+9.142353486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.363948 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897c7704cf19855 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:03.534745685 +0000 UTC m=+9.345572119,LastTimestamp:2026-02-26 11:11:03.534745685 +0000 UTC m=+9.345572119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.367768 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7704db06af8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:03.547251448 +0000 UTC m=+9.358077882,LastTimestamp:2026-02-26 11:11:03.547251448 +0000 UTC m=+9.358077882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.371889 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7704dc8e20a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:03.548854794 +0000 UTC m=+9.359681228,LastTimestamp:2026-02-26 11:11:03.548854794 +0000 UTC m=+9.359681228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.375705 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c77060c39f52 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:03.867277138 +0000 UTC m=+9.678103572,LastTimestamp:2026-02-26 11:11:03.867277138 +0000 UTC m=+9.678103572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.377396 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c770618f0e37 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:03.880609335 +0000 UTC m=+9.691435769,LastTimestamp:2026-02-26 11:11:03.880609335 +0000 UTC m=+9.691435769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.379320 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c77061a7ff7a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:03.882243962 +0000 UTC m=+9.693070396,LastTimestamp:2026-02-26 11:11:03.882243962 +0000 UTC m=+9.693070396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.381910 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897c7706c99abc7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:04.065854407 +0000 UTC m=+9.876680841,LastTimestamp:2026-02-26 11:11:04.065854407 +0000 UTC m=+9.876680841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.386209 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7706d536858 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:04.07802684 +0000 UTC m=+9.888853274,LastTimestamp:2026-02-26 11:11:04.07802684 +0000 UTC m=+9.888853274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.391073 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7706d6fc8b1 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:04.079886513 +0000 UTC m=+9.890712947,LastTimestamp:2026-02-26 11:11:04.079886513 +0000 UTC m=+9.890712947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.396715 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c77088e25879 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:04.540379257 +0000 UTC m=+10.351205691,LastTimestamp:2026-02-26 11:11:04.540379257 +0000 UTC m=+10.351205691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.401368 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897c77089c0bb44 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:04.55495354 +0000 UTC m=+10.365779964,LastTimestamp:2026-02-26 11:11:04.55495354 +0000 UTC m=+10.365779964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.405071 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c77089d4cf42 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:04.556269378 +0000 UTC m=+10.367095812,LastTimestamp:2026-02-26 11:11:04.556269378 +0000 UTC m=+10.367095812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.410303 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in 
API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c770957710f8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:04.751452408 +0000 UTC m=+10.562278842,LastTimestamp:2026-02-26 11:11:04.751452408 +0000 UTC m=+10.562278842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.414828 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7709673fd73 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:04.768028019 +0000 UTC m=+10.578854453,LastTimestamp:2026-02-26 11:11:04.768028019 +0000 UTC m=+10.578854453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.422377 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 
11:11:24 crc kubenswrapper[4699]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c7729ca57718 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 26 11:11:24 crc kubenswrapper[4699]: body: Feb 26 11:11:24 crc kubenswrapper[4699]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:13.461868312 +0000 UTC m=+19.272694766,LastTimestamp:2026-02-26 11:11:13.461868312 +0000 UTC m=+19.272694766,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:11:24 crc kubenswrapper[4699]: > Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.426692 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c7729ca746a9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 
11:11:13.461986985 +0000 UTC m=+19.272813429,LastTimestamp:2026-02-26 11:11:13.461986985 +0000 UTC m=+19.272813429,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.430998 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 11:11:24 crc kubenswrapper[4699]: &Event{ObjectMeta:{kube-apiserver-crc.1897c772bb2bb73b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 26 11:11:24 crc kubenswrapper[4699]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 11:11:24 crc kubenswrapper[4699]: Feb 26 11:11:24 crc kubenswrapper[4699]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:13.973983035 +0000 UTC m=+19.784809479,LastTimestamp:2026-02-26 11:11:13.973983035 +0000 UTC m=+19.784809479,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:11:24 crc kubenswrapper[4699]: > Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.435221 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1897c772bb2c864e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:13.974036046 +0000 UTC m=+19.784862490,LastTimestamp:2026-02-26 11:11:13.974036046 +0000 UTC m=+19.784862490,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.438850 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c772bb2bb73b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 11:11:24 crc kubenswrapper[4699]: &Event{ObjectMeta:{kube-apiserver-crc.1897c772bb2bb73b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 26 11:11:24 crc kubenswrapper[4699]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 11:11:24 crc kubenswrapper[4699]: Feb 26 11:11:24 crc kubenswrapper[4699]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:13.973983035 +0000 UTC 
m=+19.784809479,LastTimestamp:2026-02-26 11:11:13.979407907 +0000 UTC m=+19.790234341,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:11:24 crc kubenswrapper[4699]: > Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.442444 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c772bb2c864e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c772bb2c864e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:13.974036046 +0000 UTC m=+19.784862490,LastTimestamp:2026-02-26 11:11:13.979471019 +0000 UTC m=+19.790297463,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.446803 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c770039fad76\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c770039fad76 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.304640374 +0000 UTC m=+8.115466808,LastTimestamp:2026-02-26 11:11:14.385739106 +0000 UTC m=+20.196565540,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.450530 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c77015d455bb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c77015d455bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.610081211 +0000 UTC m=+8.420907635,LastTimestamp:2026-02-26 11:11:14.567235821 +0000 UTC m=+20.378062255,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.454904 4699 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.1897c7701702966e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c7701702966e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:02.629889646 +0000 UTC m=+8.440716080,LastTimestamp:2026-02-26 11:11:14.576824281 +0000 UTC m=+20.387650715,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.462081 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 11:11:24 crc kubenswrapper[4699]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c774f06b1a12 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 11:11:24 crc kubenswrapper[4699]: body: Feb 26 11:11:24 crc kubenswrapper[4699]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:23.457264146 +0000 UTC m=+29.268090590,LastTimestamp:2026-02-26 11:11:23.457264146 +0000 UTC m=+29.268090590,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:11:24 crc kubenswrapper[4699]: > Feb 26 11:11:24 crc kubenswrapper[4699]: E0226 11:11:24.466087 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c774f06c2daa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:23.457334698 +0000 UTC m=+29.268161132,LastTimestamp:2026-02-26 11:11:23.457334698 +0000 UTC m=+29.268161132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:25 crc kubenswrapper[4699]: W0226 11:11:25.093018 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 26 11:11:25 crc kubenswrapper[4699]: E0226 11:11:25.093110 4699 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 11:11:25 crc kubenswrapper[4699]: I0226 11:11:25.128452 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:26 crc kubenswrapper[4699]: I0226 11:11:26.124100 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:26 crc kubenswrapper[4699]: E0226 11:11:26.535337 4699 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:11:27 crc kubenswrapper[4699]: I0226 11:11:27.123269 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:27 crc kubenswrapper[4699]: I0226 11:11:27.971696 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:27 crc kubenswrapper[4699]: I0226 11:11:27.973427 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:27 crc kubenswrapper[4699]: I0226 11:11:27.973468 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:27 crc kubenswrapper[4699]: I0226 11:11:27.973477 4699 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:27 crc kubenswrapper[4699]: I0226 11:11:27.973684 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:11:27 crc kubenswrapper[4699]: E0226 11:11:27.977907 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 11:11:27 crc kubenswrapper[4699]: E0226 11:11:27.978222 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 11:11:28 crc kubenswrapper[4699]: I0226 11:11:28.124391 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:29 crc kubenswrapper[4699]: I0226 11:11:29.124014 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:29 crc kubenswrapper[4699]: W0226 11:11:29.968849 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 26 11:11:29 crc kubenswrapper[4699]: E0226 11:11:29.968936 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource 
\"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 26 11:11:30 crc kubenswrapper[4699]: I0226 11:11:30.124751 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:30 crc kubenswrapper[4699]: I0226 11:11:30.658554 4699 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 11:11:30 crc kubenswrapper[4699]: I0226 11:11:30.675106 4699 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 11:11:31 crc kubenswrapper[4699]: I0226 11:11:31.123199 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:32 crc kubenswrapper[4699]: I0226 11:11:32.122790 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:32 crc kubenswrapper[4699]: I0226 11:11:32.472618 4699 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:40112->192.168.126.11:10357: read: connection reset by peer" start-of-body= Feb 26 11:11:32 crc kubenswrapper[4699]: I0226 11:11:32.472711 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:40112->192.168.126.11:10357: read: connection reset by peer" Feb 26 11:11:32 crc kubenswrapper[4699]: I0226 11:11:32.472771 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:11:32 crc kubenswrapper[4699]: I0226 11:11:32.472912 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:32 crc kubenswrapper[4699]: I0226 11:11:32.474627 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:32 crc kubenswrapper[4699]: I0226 11:11:32.474703 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:32 crc kubenswrapper[4699]: I0226 11:11:32.474733 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:32 crc kubenswrapper[4699]: I0226 11:11:32.475951 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"b4003edefbd2aac9a706e0d56e2791c34c4bc9a820e5cda0ab4cf3172fc4f5c6"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 26 11:11:32 crc kubenswrapper[4699]: I0226 11:11:32.476245 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://b4003edefbd2aac9a706e0d56e2791c34c4bc9a820e5cda0ab4cf3172fc4f5c6" gracePeriod=30 Feb 26 11:11:32 crc kubenswrapper[4699]: E0226 11:11:32.477819 4699 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 11:11:32 crc kubenswrapper[4699]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c77709c743b3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:40112->192.168.126.11:10357: read: connection reset by peer Feb 26 11:11:32 crc kubenswrapper[4699]: body: Feb 26 11:11:32 crc kubenswrapper[4699]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:32.472669107 +0000 UTC m=+38.283495541,LastTimestamp:2026-02-26 11:11:32.472669107 +0000 UTC m=+38.283495541,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:11:32 crc kubenswrapper[4699]: > Feb 26 11:11:32 crc kubenswrapper[4699]: E0226 11:11:32.483794 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c77709c8545d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:40112->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:32.472738909 +0000 UTC m=+38.283565333,LastTimestamp:2026-02-26 11:11:32.472738909 +0000 UTC m=+38.283565333,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:32 crc kubenswrapper[4699]: E0226 11:11:32.492601 4699 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c77709fd7be5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:32.476222437 +0000 UTC m=+38.287048931,LastTimestamp:2026-02-26 11:11:32.476222437 +0000 UTC m=+38.287048931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:33 crc kubenswrapper[4699]: E0226 11:11:33.067816 4699 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-controller-manager-crc.1897c76fa8ce3600\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fa8ce3600 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:00.780963328 +0000 UTC m=+6.591789762,LastTimestamp:2026-02-26 11:11:33.062108385 +0000 UTC m=+38.872934819,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:33 crc kubenswrapper[4699]: I0226 11:11:33.124223 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:33 crc kubenswrapper[4699]: E0226 11:11:33.405257 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c76fbb1a2118\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fbb1a2118 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.0879286 +0000 UTC m=+6.898755034,LastTimestamp:2026-02-26 11:11:33.403473437 +0000 UTC m=+39.214299871,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:33 crc kubenswrapper[4699]: E0226 11:11:33.424021 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c76fbc9e2cd0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c76fbc9e2cd0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:01.113359568 +0000 UTC m=+6.924186002,LastTimestamp:2026-02-26 11:11:33.422559584 +0000 UTC m=+39.233386018,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:33 crc kubenswrapper[4699]: I0226 11:11:33.460987 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 11:11:33 crc kubenswrapper[4699]: I0226 11:11:33.461336 4699 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b4003edefbd2aac9a706e0d56e2791c34c4bc9a820e5cda0ab4cf3172fc4f5c6" exitCode=255 Feb 26 11:11:33 crc kubenswrapper[4699]: I0226 11:11:33.461393 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b4003edefbd2aac9a706e0d56e2791c34c4bc9a820e5cda0ab4cf3172fc4f5c6"} Feb 26 11:11:33 crc kubenswrapper[4699]: I0226 11:11:33.461475 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55229c06747f2b5d388af00f4d2aa770f2786ea7f8015579fb05381eee44235f"} Feb 26 11:11:33 crc kubenswrapper[4699]: I0226 11:11:33.461672 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:33 crc kubenswrapper[4699]: I0226 11:11:33.462923 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:33 crc kubenswrapper[4699]: I0226 11:11:33.462974 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:33 crc kubenswrapper[4699]: I0226 11:11:33.462983 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:34 crc kubenswrapper[4699]: I0226 11:11:34.123656 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" 
in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:34 crc kubenswrapper[4699]: I0226 11:11:34.978506 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:34 crc kubenswrapper[4699]: I0226 11:11:34.981091 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:34 crc kubenswrapper[4699]: I0226 11:11:34.981169 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:34 crc kubenswrapper[4699]: I0226 11:11:34.981190 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:34 crc kubenswrapper[4699]: I0226 11:11:34.981234 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:11:34 crc kubenswrapper[4699]: E0226 11:11:34.986439 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 11:11:34 crc kubenswrapper[4699]: E0226 11:11:34.986859 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 11:11:35 crc kubenswrapper[4699]: I0226 11:11:35.123169 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:35 crc kubenswrapper[4699]: I0226 11:11:35.260388 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:35 crc kubenswrapper[4699]: I0226 
11:11:35.262344 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:35 crc kubenswrapper[4699]: I0226 11:11:35.262401 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:35 crc kubenswrapper[4699]: I0226 11:11:35.262413 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:35 crc kubenswrapper[4699]: I0226 11:11:35.263108 4699 scope.go:117] "RemoveContainer" containerID="8434a1d9c6c5f5c25ebe428c5621131d21b6a77cab0e7b5990805437c2e11e90" Feb 26 11:11:36 crc kubenswrapper[4699]: I0226 11:11:36.122521 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:36 crc kubenswrapper[4699]: I0226 11:11:36.469180 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 11:11:36 crc kubenswrapper[4699]: I0226 11:11:36.469600 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 11:11:36 crc kubenswrapper[4699]: I0226 11:11:36.471258 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c2551d40b2ef08b55319c037bd0cce11c3db01d06a454a47ac46c3f5fea58b7f" exitCode=255 Feb 26 11:11:36 crc kubenswrapper[4699]: I0226 11:11:36.471310 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c2551d40b2ef08b55319c037bd0cce11c3db01d06a454a47ac46c3f5fea58b7f"} Feb 26 11:11:36 crc kubenswrapper[4699]: I0226 11:11:36.471351 4699 scope.go:117] "RemoveContainer" containerID="8434a1d9c6c5f5c25ebe428c5621131d21b6a77cab0e7b5990805437c2e11e90" Feb 26 11:11:36 crc kubenswrapper[4699]: I0226 11:11:36.471455 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:36 crc kubenswrapper[4699]: I0226 11:11:36.472318 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:36 crc kubenswrapper[4699]: I0226 11:11:36.472349 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:36 crc kubenswrapper[4699]: I0226 11:11:36.472361 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:36 crc kubenswrapper[4699]: I0226 11:11:36.472854 4699 scope.go:117] "RemoveContainer" containerID="c2551d40b2ef08b55319c037bd0cce11c3db01d06a454a47ac46c3f5fea58b7f" Feb 26 11:11:36 crc kubenswrapper[4699]: E0226 11:11:36.473046 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:11:36 crc kubenswrapper[4699]: E0226 11:11:36.535500 4699 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:11:37 crc kubenswrapper[4699]: I0226 11:11:37.123267 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:37 crc kubenswrapper[4699]: I0226 11:11:37.475938 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 11:11:37 crc kubenswrapper[4699]: I0226 11:11:37.710363 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:37 crc kubenswrapper[4699]: I0226 11:11:37.710553 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:37 crc kubenswrapper[4699]: I0226 11:11:37.711687 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:37 crc kubenswrapper[4699]: I0226 11:11:37.711739 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:37 crc kubenswrapper[4699]: I0226 11:11:37.711752 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:37 crc kubenswrapper[4699]: I0226 11:11:37.712347 4699 scope.go:117] "RemoveContainer" containerID="c2551d40b2ef08b55319c037bd0cce11c3db01d06a454a47ac46c3f5fea58b7f" Feb 26 11:11:37 crc kubenswrapper[4699]: E0226 11:11:37.712549 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:11:38 crc kubenswrapper[4699]: I0226 11:11:38.123432 4699 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:38 crc kubenswrapper[4699]: I0226 11:11:38.973026 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:38 crc kubenswrapper[4699]: I0226 11:11:38.973367 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:38 crc kubenswrapper[4699]: I0226 11:11:38.975177 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:38 crc kubenswrapper[4699]: I0226 11:11:38.975247 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:38 crc kubenswrapper[4699]: I0226 11:11:38.975268 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:38 crc kubenswrapper[4699]: I0226 11:11:38.975970 4699 scope.go:117] "RemoveContainer" containerID="c2551d40b2ef08b55319c037bd0cce11c3db01d06a454a47ac46c3f5fea58b7f" Feb 26 11:11:38 crc kubenswrapper[4699]: E0226 11:11:38.976195 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:11:39 crc kubenswrapper[4699]: I0226 11:11:39.122627 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 26 11:11:40 crc kubenswrapper[4699]: I0226 11:11:40.123879 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:40 crc kubenswrapper[4699]: I0226 11:11:40.377230 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:11:40 crc kubenswrapper[4699]: I0226 11:11:40.377441 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:40 crc kubenswrapper[4699]: I0226 11:11:40.379098 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:40 crc kubenswrapper[4699]: I0226 11:11:40.379167 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:40 crc kubenswrapper[4699]: I0226 11:11:40.379178 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:40 crc kubenswrapper[4699]: I0226 11:11:40.457204 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:11:40 crc kubenswrapper[4699]: I0226 11:11:40.486458 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:40 crc kubenswrapper[4699]: I0226 11:11:40.488181 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:40 crc kubenswrapper[4699]: I0226 11:11:40.488233 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:40 crc kubenswrapper[4699]: I0226 
11:11:40.488244 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:41 crc kubenswrapper[4699]: I0226 11:11:41.123054 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:41 crc kubenswrapper[4699]: I0226 11:11:41.986950 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:41 crc kubenswrapper[4699]: I0226 11:11:41.988828 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:41 crc kubenswrapper[4699]: I0226 11:11:41.988879 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:41 crc kubenswrapper[4699]: I0226 11:11:41.988893 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:41 crc kubenswrapper[4699]: I0226 11:11:41.988933 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:11:41 crc kubenswrapper[4699]: E0226 11:11:41.993099 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 11:11:41 crc kubenswrapper[4699]: E0226 11:11:41.993224 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 11:11:42 crc kubenswrapper[4699]: I0226 11:11:42.123512 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:42 crc kubenswrapper[4699]: W0226 11:11:42.806818 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:42 crc kubenswrapper[4699]: E0226 11:11:42.807361 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 11:11:43 crc kubenswrapper[4699]: I0226 11:11:43.123990 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:43 crc kubenswrapper[4699]: I0226 11:11:43.458094 4699 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:11:43 crc kubenswrapper[4699]: I0226 11:11:43.458227 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Feb 26 11:11:43 crc kubenswrapper[4699]: E0226 11:11:43.463520 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c774f06b1a12\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 11:11:43 crc kubenswrapper[4699]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c774f06b1a12 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 11:11:43 crc kubenswrapper[4699]: body: Feb 26 11:11:43 crc kubenswrapper[4699]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:23.457264146 +0000 UTC m=+29.268090590,LastTimestamp:2026-02-26 11:11:43.45819737 +0000 UTC m=+49.269023804,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:11:43 crc kubenswrapper[4699]: > Feb 26 11:11:43 crc kubenswrapper[4699]: E0226 11:11:43.468629 4699 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c774f06c2daa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c774f06c2daa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:11:23.457334698 +0000 UTC m=+29.268161132,LastTimestamp:2026-02-26 11:11:43.458268522 +0000 UTC m=+49.269094956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:44 crc kubenswrapper[4699]: I0226 11:11:44.120097 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:45 crc kubenswrapper[4699]: I0226 11:11:45.123667 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:46 crc kubenswrapper[4699]: I0226 11:11:46.123763 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:46 crc kubenswrapper[4699]: W0226 11:11:46.416287 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 26 11:11:46 crc kubenswrapper[4699]: 
E0226 11:11:46.416349 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 26 11:11:46 crc kubenswrapper[4699]: E0226 11:11:46.535681 4699 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:11:47 crc kubenswrapper[4699]: W0226 11:11:47.059032 4699 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 26 11:11:47 crc kubenswrapper[4699]: E0226 11:11:47.059145 4699 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 11:11:47 crc kubenswrapper[4699]: I0226 11:11:47.125703 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:48 crc kubenswrapper[4699]: I0226 11:11:48.126682 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:48 crc kubenswrapper[4699]: I0226 11:11:48.993974 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 
11:11:48 crc kubenswrapper[4699]: I0226 11:11:48.995430 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:48 crc kubenswrapper[4699]: I0226 11:11:48.995498 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:48 crc kubenswrapper[4699]: I0226 11:11:48.995513 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:48 crc kubenswrapper[4699]: I0226 11:11:48.995560 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:11:49 crc kubenswrapper[4699]: E0226 11:11:49.000185 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 11:11:49 crc kubenswrapper[4699]: E0226 11:11:49.000243 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 11:11:49 crc kubenswrapper[4699]: I0226 11:11:49.123128 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:50 crc kubenswrapper[4699]: I0226 11:11:50.123518 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:50 crc kubenswrapper[4699]: I0226 11:11:50.460749 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:11:50 crc kubenswrapper[4699]: I0226 11:11:50.461048 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:50 crc kubenswrapper[4699]: I0226 11:11:50.462551 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:50 crc kubenswrapper[4699]: I0226 11:11:50.462600 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:50 crc kubenswrapper[4699]: I0226 11:11:50.462616 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:50 crc kubenswrapper[4699]: I0226 11:11:50.464802 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:11:50 crc kubenswrapper[4699]: I0226 11:11:50.513603 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:50 crc kubenswrapper[4699]: I0226 11:11:50.515455 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:50 crc kubenswrapper[4699]: I0226 11:11:50.515521 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:50 crc kubenswrapper[4699]: I0226 11:11:50.515534 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:51 crc kubenswrapper[4699]: I0226 11:11:51.123230 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:52 crc kubenswrapper[4699]: I0226 
11:11:52.123474 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:52 crc kubenswrapper[4699]: I0226 11:11:52.841387 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:11:52 crc kubenswrapper[4699]: I0226 11:11:52.841525 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:52 crc kubenswrapper[4699]: I0226 11:11:52.842783 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:52 crc kubenswrapper[4699]: I0226 11:11:52.842877 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:52 crc kubenswrapper[4699]: I0226 11:11:52.842902 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:53 crc kubenswrapper[4699]: I0226 11:11:53.123214 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:54 crc kubenswrapper[4699]: I0226 11:11:54.125174 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:54 crc kubenswrapper[4699]: I0226 11:11:54.260725 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:54 crc kubenswrapper[4699]: I0226 11:11:54.262527 4699 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:54 crc kubenswrapper[4699]: I0226 11:11:54.262666 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:54 crc kubenswrapper[4699]: I0226 11:11:54.262756 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:54 crc kubenswrapper[4699]: I0226 11:11:54.263557 4699 scope.go:117] "RemoveContainer" containerID="c2551d40b2ef08b55319c037bd0cce11c3db01d06a454a47ac46c3f5fea58b7f" Feb 26 11:11:54 crc kubenswrapper[4699]: E0226 11:11:54.263844 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:11:55 crc kubenswrapper[4699]: I0226 11:11:55.124261 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:56 crc kubenswrapper[4699]: I0226 11:11:56.000641 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:11:56 crc kubenswrapper[4699]: I0226 11:11:56.002098 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:11:56 crc kubenswrapper[4699]: I0226 11:11:56.002170 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:11:56 crc kubenswrapper[4699]: I0226 11:11:56.002191 4699 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:11:56 crc kubenswrapper[4699]: I0226 11:11:56.002216 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:11:56 crc kubenswrapper[4699]: E0226 11:11:56.006143 4699 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 11:11:56 crc kubenswrapper[4699]: E0226 11:11:56.006408 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 11:11:56 crc kubenswrapper[4699]: I0226 11:11:56.123455 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:56 crc kubenswrapper[4699]: E0226 11:11:56.536216 4699 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:11:57 crc kubenswrapper[4699]: I0226 11:11:57.127813 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:58 crc kubenswrapper[4699]: I0226 11:11:58.123836 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:11:59 crc kubenswrapper[4699]: I0226 11:11:59.122810 4699 csi_plugin.go:884] Failed to contact 
API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:12:00 crc kubenswrapper[4699]: I0226 11:12:00.125680 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:12:01 crc kubenswrapper[4699]: I0226 11:12:01.123961 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:12:02 crc kubenswrapper[4699]: I0226 11:12:02.123279 4699 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:12:02 crc kubenswrapper[4699]: I0226 11:12:02.239793 4699 csr.go:261] certificate signing request csr-xfmsk is approved, waiting to be issued Feb 26 11:12:02 crc kubenswrapper[4699]: I0226 11:12:02.254966 4699 csr.go:257] certificate signing request csr-xfmsk is issued Feb 26 11:12:02 crc kubenswrapper[4699]: I0226 11:12:02.310741 4699 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 26 11:12:02 crc kubenswrapper[4699]: I0226 11:12:02.516501 4699 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.007173 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.008512 4699 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.008561 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.008581 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.008693 4699 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.022636 4699 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.023158 4699 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.023262 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.028497 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.028541 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.028550 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.028587 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.028599 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:03Z","lastTransitionTime":"2026-02-26T11:12:03Z","reason":"KubeletNotReady","message":"[container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.042558 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"r
egistry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"siz
eBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\"
:\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.051347 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.051398 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.051409 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.051437 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.051454 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:03Z","lastTransitionTime":"2026-02-26T11:12:03Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.065969 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb
-21c35124a700\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.076666 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.076710 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.076721 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.076743 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.076756 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:03Z","lastTransitionTime":"2026-02-26T11:12:03Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.092335 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb
-21c35124a700\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.101655 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.101714 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.101736 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.102018 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.102037 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:03Z","lastTransitionTime":"2026-02-26T11:12:03Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.114737 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:03Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb
-21c35124a700\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.114924 4699 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.114949 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.215696 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.259640 4699 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-30 04:18:17.664345148 +0000 UTC Feb 26 11:12:03 crc kubenswrapper[4699]: I0226 11:12:03.259714 4699 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6641h6m14.404635008s for next certificate rotation Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.316557 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.417325 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.517724 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.618280 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.719039 4699 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.819228 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:03 crc kubenswrapper[4699]: E0226 11:12:03.919351 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:04 crc kubenswrapper[4699]: E0226 11:12:04.021476 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:04 crc kubenswrapper[4699]: E0226 11:12:04.122413 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:04 crc kubenswrapper[4699]: E0226 11:12:04.222826 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:04 crc kubenswrapper[4699]: I0226 11:12:04.260534 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:12:04 crc kubenswrapper[4699]: I0226 11:12:04.262409 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:04 crc kubenswrapper[4699]: I0226 11:12:04.262457 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:04 crc kubenswrapper[4699]: I0226 11:12:04.262471 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:04 crc kubenswrapper[4699]: E0226 11:12:04.323767 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:04 crc kubenswrapper[4699]: E0226 11:12:04.424643 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:04 crc 
kubenswrapper[4699]: E0226 11:12:04.525924 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:04 crc kubenswrapper[4699]: E0226 11:12:04.626428 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:04 crc kubenswrapper[4699]: E0226 11:12:04.727485 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:04 crc kubenswrapper[4699]: E0226 11:12:04.828263 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:04 crc kubenswrapper[4699]: E0226 11:12:04.928803 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:05 crc kubenswrapper[4699]: E0226 11:12:05.029352 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:05 crc kubenswrapper[4699]: E0226 11:12:05.129513 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:05 crc kubenswrapper[4699]: E0226 11:12:05.230381 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:05 crc kubenswrapper[4699]: E0226 11:12:05.331212 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:05 crc kubenswrapper[4699]: E0226 11:12:05.431731 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:05 crc kubenswrapper[4699]: E0226 11:12:05.531929 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:05 crc kubenswrapper[4699]: E0226 11:12:05.633140 4699 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 26 11:12:05 crc kubenswrapper[4699]: E0226 11:12:05.733366 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:05 crc kubenswrapper[4699]: E0226 11:12:05.833982 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:05 crc kubenswrapper[4699]: E0226 11:12:05.934665 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:06 crc kubenswrapper[4699]: E0226 11:12:06.035697 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:06 crc kubenswrapper[4699]: E0226 11:12:06.136648 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:06 crc kubenswrapper[4699]: E0226 11:12:06.237150 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:06 crc kubenswrapper[4699]: I0226 11:12:06.260613 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:12:06 crc kubenswrapper[4699]: I0226 11:12:06.262049 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:06 crc kubenswrapper[4699]: I0226 11:12:06.262092 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:06 crc kubenswrapper[4699]: I0226 11:12:06.262104 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:06 crc kubenswrapper[4699]: I0226 11:12:06.262887 4699 scope.go:117] "RemoveContainer" containerID="c2551d40b2ef08b55319c037bd0cce11c3db01d06a454a47ac46c3f5fea58b7f" Feb 26 11:12:06 crc kubenswrapper[4699]: E0226 11:12:06.338176 4699 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:06 crc kubenswrapper[4699]: E0226 11:12:06.438819 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:06 crc kubenswrapper[4699]: E0226 11:12:06.537099 4699 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:12:06 crc kubenswrapper[4699]: E0226 11:12:06.539031 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:06 crc kubenswrapper[4699]: I0226 11:12:06.567066 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 11:12:06 crc kubenswrapper[4699]: I0226 11:12:06.570692 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0"} Feb 26 11:12:06 crc kubenswrapper[4699]: I0226 11:12:06.570972 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:12:06 crc kubenswrapper[4699]: I0226 11:12:06.573158 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:06 crc kubenswrapper[4699]: I0226 11:12:06.573209 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:06 crc kubenswrapper[4699]: I0226 11:12:06.573219 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:06 crc kubenswrapper[4699]: E0226 11:12:06.639628 4699 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 26 11:12:06 crc kubenswrapper[4699]: E0226 11:12:06.740248 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:06 crc kubenswrapper[4699]: E0226 11:12:06.840845 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:06 crc kubenswrapper[4699]: E0226 11:12:06.941977 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:07 crc kubenswrapper[4699]: E0226 11:12:07.042147 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:07 crc kubenswrapper[4699]: E0226 11:12:07.142980 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:07 crc kubenswrapper[4699]: E0226 11:12:07.243949 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:07 crc kubenswrapper[4699]: E0226 11:12:07.345582 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:07 crc kubenswrapper[4699]: E0226 11:12:07.445977 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:07 crc kubenswrapper[4699]: E0226 11:12:07.546525 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:07 crc kubenswrapper[4699]: E0226 11:12:07.647198 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:07 crc kubenswrapper[4699]: I0226 11:12:07.710528 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:12:07 crc kubenswrapper[4699]: I0226 11:12:07.710714 4699 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:12:07 crc kubenswrapper[4699]: I0226 11:12:07.712228 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:07 crc kubenswrapper[4699]: I0226 11:12:07.712266 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:07 crc kubenswrapper[4699]: I0226 11:12:07.712280 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:07 crc kubenswrapper[4699]: E0226 11:12:07.747606 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:07 crc kubenswrapper[4699]: E0226 11:12:07.848163 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:07 crc kubenswrapper[4699]: E0226 11:12:07.949185 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:08 crc kubenswrapper[4699]: E0226 11:12:08.049865 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:08 crc kubenswrapper[4699]: E0226 11:12:08.150816 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:08 crc kubenswrapper[4699]: E0226 11:12:08.251241 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:08 crc kubenswrapper[4699]: E0226 11:12:08.351990 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:08 crc kubenswrapper[4699]: E0226 11:12:08.453072 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:08 crc 
kubenswrapper[4699]: E0226 11:12:08.553655 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:08 crc kubenswrapper[4699]: I0226 11:12:08.580775 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 11:12:08 crc kubenswrapper[4699]: I0226 11:12:08.581410 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 11:12:08 crc kubenswrapper[4699]: I0226 11:12:08.583429 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0" exitCode=255 Feb 26 11:12:08 crc kubenswrapper[4699]: I0226 11:12:08.583475 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0"} Feb 26 11:12:08 crc kubenswrapper[4699]: I0226 11:12:08.583529 4699 scope.go:117] "RemoveContainer" containerID="c2551d40b2ef08b55319c037bd0cce11c3db01d06a454a47ac46c3f5fea58b7f" Feb 26 11:12:08 crc kubenswrapper[4699]: I0226 11:12:08.583951 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:12:08 crc kubenswrapper[4699]: I0226 11:12:08.585251 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:08 crc kubenswrapper[4699]: I0226 11:12:08.585302 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:08 crc kubenswrapper[4699]: I0226 11:12:08.585317 4699 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:08 crc kubenswrapper[4699]: I0226 11:12:08.586324 4699 scope.go:117] "RemoveContainer" containerID="eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0" Feb 26 11:12:08 crc kubenswrapper[4699]: E0226 11:12:08.586561 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:12:08 crc kubenswrapper[4699]: E0226 11:12:08.654563 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:08 crc kubenswrapper[4699]: E0226 11:12:08.755661 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:08 crc kubenswrapper[4699]: E0226 11:12:08.856160 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:08 crc kubenswrapper[4699]: E0226 11:12:08.957062 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:08 crc kubenswrapper[4699]: I0226 11:12:08.972230 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:12:09 crc kubenswrapper[4699]: E0226 11:12:09.057800 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:09 crc kubenswrapper[4699]: E0226 11:12:09.158199 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:09 crc kubenswrapper[4699]: E0226 11:12:09.258838 4699 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:09 crc kubenswrapper[4699]: E0226 11:12:09.359363 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:09 crc kubenswrapper[4699]: E0226 11:12:09.460424 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:09 crc kubenswrapper[4699]: E0226 11:12:09.561247 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:09 crc kubenswrapper[4699]: I0226 11:12:09.588394 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 11:12:09 crc kubenswrapper[4699]: I0226 11:12:09.590669 4699 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:12:09 crc kubenswrapper[4699]: I0226 11:12:09.591661 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:09 crc kubenswrapper[4699]: I0226 11:12:09.591701 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:09 crc kubenswrapper[4699]: I0226 11:12:09.591713 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:09 crc kubenswrapper[4699]: I0226 11:12:09.592410 4699 scope.go:117] "RemoveContainer" containerID="eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0" Feb 26 11:12:09 crc kubenswrapper[4699]: E0226 11:12:09.592600 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:12:09 crc kubenswrapper[4699]: E0226 11:12:09.662273 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:09 crc kubenswrapper[4699]: E0226 11:12:09.763185 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:09 crc kubenswrapper[4699]: E0226 11:12:09.863489 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:09 crc kubenswrapper[4699]: E0226 11:12:09.964274 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:10 crc kubenswrapper[4699]: E0226 11:12:10.064894 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:10 crc kubenswrapper[4699]: E0226 11:12:10.165220 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:10 crc kubenswrapper[4699]: E0226 11:12:10.266361 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:10 crc kubenswrapper[4699]: E0226 11:12:10.366554 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:10 crc kubenswrapper[4699]: E0226 11:12:10.467064 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:10 crc kubenswrapper[4699]: E0226 11:12:10.568063 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:10 crc kubenswrapper[4699]: E0226 11:12:10.669397 4699 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:10 crc kubenswrapper[4699]: E0226 11:12:10.770175 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:10 crc kubenswrapper[4699]: E0226 11:12:10.870526 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:10 crc kubenswrapper[4699]: I0226 11:12:10.892312 4699 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 11:12:10 crc kubenswrapper[4699]: E0226 11:12:10.971335 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:11 crc kubenswrapper[4699]: E0226 11:12:11.071967 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:11 crc kubenswrapper[4699]: E0226 11:12:11.172476 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:11 crc kubenswrapper[4699]: E0226 11:12:11.272963 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:11 crc kubenswrapper[4699]: E0226 11:12:11.374046 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:11 crc kubenswrapper[4699]: E0226 11:12:11.475238 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:11 crc kubenswrapper[4699]: E0226 11:12:11.575662 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:11 crc kubenswrapper[4699]: E0226 11:12:11.676647 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:11 crc 
kubenswrapper[4699]: E0226 11:12:11.777056 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:11 crc kubenswrapper[4699]: E0226 11:12:11.877428 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:11 crc kubenswrapper[4699]: E0226 11:12:11.978653 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:12 crc kubenswrapper[4699]: E0226 11:12:12.079680 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:12 crc kubenswrapper[4699]: E0226 11:12:12.180683 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:12 crc kubenswrapper[4699]: E0226 11:12:12.281306 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:12 crc kubenswrapper[4699]: E0226 11:12:12.381853 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:12 crc kubenswrapper[4699]: E0226 11:12:12.482774 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:12 crc kubenswrapper[4699]: E0226 11:12:12.583955 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:12 crc kubenswrapper[4699]: E0226 11:12:12.684083 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:12 crc kubenswrapper[4699]: E0226 11:12:12.784840 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:12 crc kubenswrapper[4699]: E0226 11:12:12.885859 4699 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 26 11:12:12 crc kubenswrapper[4699]: E0226 11:12:12.987091 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.087888 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.188682 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.240387 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.245537 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.245909 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.245991 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.246090 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.246216 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:13Z","lastTransitionTime":"2026-02-26T11:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.259100 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.265247 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.265315 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.265329 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.265357 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.265370 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:13Z","lastTransitionTime":"2026-02-26T11:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.278571 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.283316 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.283399 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.283441 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.283462 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.283475 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:13Z","lastTransitionTime":"2026-02-26T11:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.295645 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.301984 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.302038 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.302051 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.302072 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:13 crc kubenswrapper[4699]: I0226 11:12:13.302086 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:13Z","lastTransitionTime":"2026-02-26T11:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.314309 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.314477 4699 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.314512 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.415677 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.516815 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.617333 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.717935 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.818669 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:13 crc kubenswrapper[4699]: E0226 11:12:13.919151 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:14 crc kubenswrapper[4699]: E0226 11:12:14.019517 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:14 crc kubenswrapper[4699]: E0226 11:12:14.119778 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:14 crc kubenswrapper[4699]: E0226 11:12:14.220848 4699 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:14 crc kubenswrapper[4699]: E0226 11:12:14.321466 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:14 crc kubenswrapper[4699]: E0226 11:12:14.421839 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:14 crc kubenswrapper[4699]: E0226 11:12:14.523024 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:14 crc kubenswrapper[4699]: E0226 11:12:14.623965 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:14 crc kubenswrapper[4699]: E0226 11:12:14.725054 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:14 crc kubenswrapper[4699]: E0226 11:12:14.825864 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:14 crc kubenswrapper[4699]: E0226 11:12:14.926940 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:15 crc kubenswrapper[4699]: E0226 11:12:15.027417 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:15 crc kubenswrapper[4699]: E0226 11:12:15.128152 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:15 crc kubenswrapper[4699]: E0226 11:12:15.229205 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:15 crc kubenswrapper[4699]: E0226 11:12:15.330039 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:15 crc 
kubenswrapper[4699]: E0226 11:12:15.431009 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:15 crc kubenswrapper[4699]: E0226 11:12:15.531408 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:15 crc kubenswrapper[4699]: E0226 11:12:15.631753 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:15 crc kubenswrapper[4699]: E0226 11:12:15.732188 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:15 crc kubenswrapper[4699]: E0226 11:12:15.833071 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:15 crc kubenswrapper[4699]: E0226 11:12:15.933762 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:16 crc kubenswrapper[4699]: E0226 11:12:16.034315 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:16 crc kubenswrapper[4699]: E0226 11:12:16.135190 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:16 crc kubenswrapper[4699]: E0226 11:12:16.235824 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:16 crc kubenswrapper[4699]: E0226 11:12:16.336700 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:16 crc kubenswrapper[4699]: E0226 11:12:16.437173 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:16 crc kubenswrapper[4699]: E0226 11:12:16.538351 4699 eviction_manager.go:285] "Eviction manager: failed to get summary stats" 
err="failed to get node info: node \"crc\" not found" Feb 26 11:12:16 crc kubenswrapper[4699]: E0226 11:12:16.538410 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:16 crc kubenswrapper[4699]: E0226 11:12:16.639072 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:16 crc kubenswrapper[4699]: E0226 11:12:16.740101 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:16 crc kubenswrapper[4699]: E0226 11:12:16.841284 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:16 crc kubenswrapper[4699]: E0226 11:12:16.941471 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:17 crc kubenswrapper[4699]: E0226 11:12:17.042474 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:17 crc kubenswrapper[4699]: E0226 11:12:17.143204 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:17 crc kubenswrapper[4699]: E0226 11:12:17.243829 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:17 crc kubenswrapper[4699]: E0226 11:12:17.344283 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:17 crc kubenswrapper[4699]: E0226 11:12:17.445361 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:17 crc kubenswrapper[4699]: E0226 11:12:17.545456 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:17 crc kubenswrapper[4699]: E0226 11:12:17.645590 4699 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:17 crc kubenswrapper[4699]: E0226 11:12:17.746680 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:17 crc kubenswrapper[4699]: E0226 11:12:17.847441 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:17 crc kubenswrapper[4699]: E0226 11:12:17.948546 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:18 crc kubenswrapper[4699]: E0226 11:12:18.048903 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:18 crc kubenswrapper[4699]: E0226 11:12:18.149896 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:18 crc kubenswrapper[4699]: E0226 11:12:18.250708 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:18 crc kubenswrapper[4699]: E0226 11:12:18.351368 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:18 crc kubenswrapper[4699]: E0226 11:12:18.452319 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:18 crc kubenswrapper[4699]: E0226 11:12:18.552526 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:18 crc kubenswrapper[4699]: E0226 11:12:18.653379 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:18 crc kubenswrapper[4699]: E0226 11:12:18.753626 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:18 crc 
kubenswrapper[4699]: E0226 11:12:18.853956 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:18 crc kubenswrapper[4699]: E0226 11:12:18.954980 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:19 crc kubenswrapper[4699]: E0226 11:12:19.055414 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:19 crc kubenswrapper[4699]: E0226 11:12:19.155921 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:19 crc kubenswrapper[4699]: E0226 11:12:19.256405 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:19 crc kubenswrapper[4699]: E0226 11:12:19.357206 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:19 crc kubenswrapper[4699]: E0226 11:12:19.457969 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:19 crc kubenswrapper[4699]: E0226 11:12:19.559248 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:19 crc kubenswrapper[4699]: E0226 11:12:19.659441 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:19 crc kubenswrapper[4699]: E0226 11:12:19.760752 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:19 crc kubenswrapper[4699]: E0226 11:12:19.861353 4699 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:12:19 crc kubenswrapper[4699]: I0226 11:12:19.873322 4699 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Feb 26 11:12:19 crc kubenswrapper[4699]: I0226 11:12:19.964607 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:19 crc kubenswrapper[4699]: I0226 11:12:19.964676 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:19 crc kubenswrapper[4699]: I0226 11:12:19.964690 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:19 crc kubenswrapper[4699]: I0226 11:12:19.964707 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:19 crc kubenswrapper[4699]: I0226 11:12:19.964720 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:19Z","lastTransitionTime":"2026-02-26T11:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.068363 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.068424 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.068434 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.068453 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.068463 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:20Z","lastTransitionTime":"2026-02-26T11:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.078496 4699 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.130025 4699 apiserver.go:52] "Watching apiserver" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.136739 4699 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.137170 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.137747 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.137816 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.138273 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.138615 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.138354 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.138416 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.139595 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.138309 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.138684 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.142584 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.142585 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.142777 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.143276 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.143614 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.143777 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.143844 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.144021 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.144608 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.171360 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.173304 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.173351 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.173362 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.173378 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.173390 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:20Z","lastTransitionTime":"2026-02-26T11:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.186958 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.210476 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.219979 4699 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.385497 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.385546 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386531 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.385962 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386564 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386608 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386634 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386660 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386682 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386705 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386726 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386746 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386767 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 
11:12:20.386789 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386849 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386875 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386880 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386897 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386920 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386942 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386969 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.386998 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387023 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387048 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387070 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387092 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387103 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387134 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387158 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387182 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387204 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387359 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387387 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387412 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387436 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387460 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387480 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387500 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 
11:12:20.387524 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387546 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387568 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387595 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387620 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387642 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387665 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387687 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387708 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387729 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387756 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 11:12:20 crc 
kubenswrapper[4699]: I0226 11:12:20.387777 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387799 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387821 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387845 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387871 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387891 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387913 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387931 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387953 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387977 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388000 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388020 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388043 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388064 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388083 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388104 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388147 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388172 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388194 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388218 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388239 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388263 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388285 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388306 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388326 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388352 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388379 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388403 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388427 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388450 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388471 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388498 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388520 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388541 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388562 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388585 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388609 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388633 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388656 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" 
(UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388679 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388702 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388726 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388745 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388769 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388792 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388824 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388854 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388881 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387221 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387265 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387513 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387552 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387695 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387774 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.387832 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388013 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388189 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388230 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388288 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388301 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388421 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388738 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388820 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388838 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388879 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.388904 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.389353 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.389396 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.389426 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.389453 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.389486 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.389502 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.389538 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.389549 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.389566 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.389577 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:20Z","lastTransitionTime":"2026-02-26T11:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391056 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391044 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.389514 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391196 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391207 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391263 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391364 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391402 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391437 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391466 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391494 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391502 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391523 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391530 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391556 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391551 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391589 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391621 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391655 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391692 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391722 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391751 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391781 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391828 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391881 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391910 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391936 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391968 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391997 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392024 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 11:12:20 crc 
kubenswrapper[4699]: I0226 11:12:20.392052 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392080 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392130 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392162 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392192 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392215 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392240 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392268 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392295 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392324 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392357 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 11:12:20 crc 
kubenswrapper[4699]: I0226 11:12:20.392388 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392419 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392442 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392466 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392495 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392522 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392547 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392576 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392606 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392640 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392668 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392695 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392748 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392770 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392791 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392815 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392836 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392861 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392887 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392913 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392937 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392961 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392985 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393021 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393053 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393077 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393105 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393181 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 
11:12:20.393218 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393246 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393274 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393302 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393333 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393358 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393457 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393495 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393525 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393552 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393578 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:12:20 
crc kubenswrapper[4699]: I0226 11:12:20.393605 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393635 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393666 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393698 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393727 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393756 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393789 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393817 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393849 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393875 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393901 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393927 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393962 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393989 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394018 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394048 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394081 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394110 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394159 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394188 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394225 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394272 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394423 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" 
(UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394463 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394501 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394547 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394581 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:12:20 crc 
kubenswrapper[4699]: I0226 11:12:20.394623 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394659 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394695 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394726 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394759 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394791 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394829 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394868 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394900 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395007 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on 
node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395025 4699 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395040 4699 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395055 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395069 4699 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395082 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395096 4699 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395110 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395143 4699 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395157 4699 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395172 4699 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395190 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395205 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395218 4699 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395231 4699 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395244 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" 
DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395258 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395272 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395285 4699 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395298 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395311 4699 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395324 4699 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395341 4699 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395354 4699 
reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.395368 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.416052 4699 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391586 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391594 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.419010 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391820 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391936 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.391948 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392037 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392212 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.392987 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393372 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393382 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393523 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393655 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393778 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.419250 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394028 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.393485 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394218 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394291 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394331 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394331 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394394 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394796 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.394860 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.396567 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.396582 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.397064 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.397141 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.397149 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.397194 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.397333 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.397603 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.397653 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.397686 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.397728 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.397745 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.398656 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.398791 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.398995 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.399081 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.400003 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.400020 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.400238 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.400340 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.400452 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.400493 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.400559 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.400684 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.400837 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.402268 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.402270 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.402441 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.402459 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.402628 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.402798 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.402864 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.403109 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.403303 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.403436 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.403494 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.403661 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.403686 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.404096 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.404388 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.404638 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:12:20.904512783 +0000 UTC m=+86.715339387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.405372 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.405532 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.406079 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.407599 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.407716 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.407759 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.407828 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.408003 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.408176 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.408275 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.408588 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.408744 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.408842 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.409053 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.409157 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.409300 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.409408 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.409619 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.409910 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.410230 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.410282 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.411278 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.411406 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.411592 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.411905 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.412407 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.412914 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.412982 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.413328 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.412879 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.414587 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.415209 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.415339 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.414980 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.417754 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.417939 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.417980 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.418516 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.419185 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.419534 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.419615 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.419732 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.421690 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.421858 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.422062 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.422226 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.422428 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.423374 4699 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.424065 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.424882 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.424715 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.425382 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.425473 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.425576 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:20.925552761 +0000 UTC m=+86.736379385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.426044 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.426149 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.426789 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.426901 4699 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.427266 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:20.927243917 +0000 UTC m=+86.738070541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.428326 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.428677 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.492565 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.492857 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.493294 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.493369 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.493654 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.493775 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.494018 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.494157 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.494173 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.494232 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.494284 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.494291 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.494320 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.494415 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.495056 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.495101 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.495148 4699 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.495178 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.495407 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.495599 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.497201 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.497306 4699 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.495714 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.496397 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.496401 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.496527 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.496835 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.496934 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:20.996886048 +0000 UTC m=+86.807712482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.497873 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.497980 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.497998 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.498301 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:20.998234565 +0000 UTC m=+86.809061119 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.499023 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.497710 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.496963 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.498375 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.498398 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.499283 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501196 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501251 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501265 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\"
(UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501277 4699 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501288 4699 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501303 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501315 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501327 4699 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501337 4699 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501351 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" 
DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501362 4699 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501373 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501384 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501398 4699 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501410 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501421 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501439 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501450 
4699 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501460 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501470 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501483 4699 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501493 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501504 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501515 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501533 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501543 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501555 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501568 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501578 4699 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501587 4699 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501596 4699 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501608 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501617 4699 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501630 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501640 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501652 4699 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501662 4699 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501671 4699 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501681 4699 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node 
\"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501694 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501721 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501732 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501746 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501755 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501764 4699 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501775 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501788 
4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501797 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501806 4699 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501817 4699 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501828 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501837 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501848 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501861 4699 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501870 4699 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501880 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501892 4699 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501904 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501914 4699 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501928 4699 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501940 4699 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501956 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501969 4699 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501981 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.501994 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502008 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502017 4699 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502027 4699 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502040 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502050 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502062 4699 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502072 4699 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502083 4699 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502092 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502102 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502132 4699 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502144 4699 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502154 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502163 4699 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502175 4699 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502184 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502235 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") 
on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502249 4699 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502266 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502279 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502288 4699 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502346 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502391 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502410 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc 
kubenswrapper[4699]: I0226 11:12:20.502437 4699 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502480 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502495 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502510 4699 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502528 4699 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502548 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502561 4699 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502576 4699 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502594 4699 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502609 4699 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502624 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502641 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502655 4699 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502670 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502684 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502702 4699 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502715 4699 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502728 4699 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502742 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502759 4699 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502773 4699 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502790 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 
11:12:20.502804 4699 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502820 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502863 4699 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502880 4699 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502898 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502912 4699 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502927 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502942 4699 reconciler_common.go:293] "Volume detached for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502960 4699 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502975 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.502988 4699 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503001 4699 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503019 4699 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503033 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503047 4699 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node 
\"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503064 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503078 4699 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503092 4699 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503179 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503195 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503208 4699 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503222 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503234 4699 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503251 4699 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503264 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503276 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503289 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503306 4699 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503319 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503705 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.503896 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.505128 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.505818 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.505813 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.505648 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.506168 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.506294 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.506365 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.506589 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.506682 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.506794 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.506786 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.506818 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.506800 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.507392 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.507639 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.507781 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.508069 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.508004 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.508128 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.508248 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.508534 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.508563 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.508995 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.510440 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.510607 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.510722 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.510838 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.510921 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:20Z","lastTransitionTime":"2026-02-26T11:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.510490 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.510947 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.515327 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.524329 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.524824 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.536663 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.538264 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.603945 4699 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.603986 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604001 4699 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604012 4699 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604024 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604036 4699 reconciler_common.go:293] "Volume detached for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604047 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604058 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604068 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604077 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604086 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604095 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604103 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604132 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604152 4699 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604163 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604173 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604196 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604204 4699 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604213 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: 
I0226 11:12:20.604221 4699 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604230 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604238 4699 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604245 4699 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604253 4699 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604262 4699 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604270 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604279 4699 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.604287 4699 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.613568 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.613623 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.613635 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.613653 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.613665 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:20Z","lastTransitionTime":"2026-02-26T11:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.716384 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.716426 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.716437 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.716453 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.716464 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:20Z","lastTransitionTime":"2026-02-26T11:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.763097 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.770899 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:12:20 crc kubenswrapper[4699]: W0226 11:12:20.786696 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-ea395e839fe793a72c8ec8cb739811a6921caac61fd8ceef6e30ca19eb7732a6 WatchSource:0}: Error finding container ea395e839fe793a72c8ec8cb739811a6921caac61fd8ceef6e30ca19eb7732a6: Status 404 returned error can't find the container with id ea395e839fe793a72c8ec8cb739811a6921caac61fd8ceef6e30ca19eb7732a6 Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.789198 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.819191 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.819765 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.819802 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.819827 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.819845 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:20Z","lastTransitionTime":"2026-02-26T11:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.907213 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:20 crc kubenswrapper[4699]: E0226 11:12:20.907445 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:12:21.90739545 +0000 UTC m=+87.718221884 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.923164 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.923222 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.923237 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:20 crc kubenswrapper[4699]: I0226 11:12:20.923255 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:20 crc kubenswrapper[4699]: 
I0226 11:12:20.923267 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:20Z","lastTransitionTime":"2026-02-26T11:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.008371 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.008446 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.008473 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.008503 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.008588 4699 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.008647 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:22.008629868 +0000 UTC m=+87.819456302 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.009072 4699 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.009136 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:22.009107791 +0000 UTC m=+87.819934225 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.009209 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.009225 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.009237 4699 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.009267 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:22.009258385 +0000 UTC m=+87.820084819 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.009322 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.009333 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.009342 4699 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.009366 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:22.009359028 +0000 UTC m=+87.820185462 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.027094 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.027186 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.027234 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.027260 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.027273 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:21Z","lastTransitionTime":"2026-02-26T11:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.130148 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.130212 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.130224 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.130245 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.130256 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:21Z","lastTransitionTime":"2026-02-26T11:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.233030 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.233071 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.233082 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.233102 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.233130 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:21Z","lastTransitionTime":"2026-02-26T11:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.336174 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.336233 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.336244 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.336265 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.336287 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:21Z","lastTransitionTime":"2026-02-26T11:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.439235 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.439307 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.439323 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.439348 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.439361 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:21Z","lastTransitionTime":"2026-02-26T11:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.541753 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.541821 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.541836 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.541858 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.541870 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:21Z","lastTransitionTime":"2026-02-26T11:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.624897 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.624987 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"133a8e69467e5c97f3a135ab21c364e50741580a1dc47ad478db24e222d78eb9"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.627577 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.627639 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ea395e839fe793a72c8ec8cb739811a6921caac61fd8ceef6e30ca19eb7732a6"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.629271 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"34e967ea2bb3904d502b9c8d4ce015eed4342b553e45c76324f0bb014f3d76fe"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.643385 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.644690 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.644766 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.644782 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.644810 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.644834 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:21Z","lastTransitionTime":"2026-02-26T11:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.656874 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.670102 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.685917 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.700629 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.713601 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.747221 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.747258 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.747267 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.747281 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.747291 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:21Z","lastTransitionTime":"2026-02-26T11:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.849802 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.849844 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.849852 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.849868 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.849878 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:21Z","lastTransitionTime":"2026-02-26T11:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.913895 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:21 crc kubenswrapper[4699]: E0226 11:12:21.914197 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:12:23.914160981 +0000 UTC m=+89.724987435 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.951599 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.951689 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.951706 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:21 crc kubenswrapper[4699]: I0226 11:12:21.951723 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:21 crc kubenswrapper[4699]: 
I0226 11:12:21.951734 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:21Z","lastTransitionTime":"2026-02-26T11:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.014801 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.014872 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.014900 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.014932 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015013 4699 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015022 4699 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015044 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015073 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015090 4699 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015102 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:24.015080619 +0000 UTC m=+89.825907073 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015154 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:24.01511181 +0000 UTC m=+89.825938254 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015171 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:24.015163672 +0000 UTC m=+89.825990126 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015244 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015296 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015314 4699 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.015412 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:24.015383708 +0000 UTC m=+89.826210312 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.054071 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.054163 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.054186 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.054205 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.054217 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:22Z","lastTransitionTime":"2026-02-26T11:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.157168 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.157238 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.157249 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.157272 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.157285 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:22Z","lastTransitionTime":"2026-02-26T11:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.259977 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.260002 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.260255 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.260520 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.260583 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.260927 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.260949 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.260621 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.260972 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:22 crc kubenswrapper[4699]: E0226 11:12:22.260813 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.261040 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:22Z","lastTransitionTime":"2026-02-26T11:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.264748 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.265598 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.267467 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.268676 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.270046 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.270763 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.272738 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.274179 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.275014 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.276182 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.276922 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.278390 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.279023 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.279678 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.280768 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.281594 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.282715 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.283250 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.283977 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.285322 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.285931 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.287515 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.288038 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.289267 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.290757 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.291566 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.292929 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.293464 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.294677 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.295416 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.296524 4699 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.296651 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.299218 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.300365 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.301908 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.304468 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.305231 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.306591 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.307565 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.309605 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.310401 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.311651 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.312333 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.313561 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.314030 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.314949 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.315912 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.317515 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.318049 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.318978 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.319562 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.320766 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.321527 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.321995 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.363711 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.363763 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:22 crc 
kubenswrapper[4699]: I0226 11:12:22.363774 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.363792 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.363804 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:22Z","lastTransitionTime":"2026-02-26T11:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.465466 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.465514 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.465523 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.465536 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.465545 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:22Z","lastTransitionTime":"2026-02-26T11:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.567421 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.567454 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.567464 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.567477 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.567487 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:22Z","lastTransitionTime":"2026-02-26T11:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.634380 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2"} Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.647824 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.663167 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.669888 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.669940 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.669953 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.669968 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.669980 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:22Z","lastTransitionTime":"2026-02-26T11:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.680044 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T
11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.695132 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.708778 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.726080 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.773285 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.773325 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.773337 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.773352 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.773365 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:22Z","lastTransitionTime":"2026-02-26T11:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.876934 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.877001 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.877017 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.877042 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.877056 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:22Z","lastTransitionTime":"2026-02-26T11:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.980204 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.980277 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.980292 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.980315 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:22 crc kubenswrapper[4699]: I0226 11:12:22.980328 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:22Z","lastTransitionTime":"2026-02-26T11:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.086394 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.086446 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.086471 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.086491 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.086505 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.189544 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.189599 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.189612 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.189631 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.189644 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.292976 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.293020 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.293039 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.293057 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.293072 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.395651 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.395714 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.395725 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.395740 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.395751 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.406241 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.406328 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.406345 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.406372 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.406425 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: E0226 11:12:23.421579 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.429028 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.430294 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.430324 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.430346 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.430359 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: E0226 11:12:23.448394 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.454954 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.455015 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.455029 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.455049 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.455061 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: E0226 11:12:23.471728 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.477723 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.477806 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.477828 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.477854 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.477869 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: E0226 11:12:23.494963 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.501038 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.501099 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.501111 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.501151 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.501165 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: E0226 11:12:23.517171 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:23 crc kubenswrapper[4699]: E0226 11:12:23.517352 4699 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.520400 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.520454 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.520468 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.520488 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.520503 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.623416 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.623471 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.623494 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.623511 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.623524 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.727208 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.727267 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.727281 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.727308 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.727324 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.830359 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.830508 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.830522 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.830541 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.830556 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.931551 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:23 crc kubenswrapper[4699]: E0226 11:12:23.931930 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 11:12:27.931897119 +0000 UTC m=+93.742723563 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.932852 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.933612 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.933671 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.933697 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:23 crc kubenswrapper[4699]: I0226 11:12:23.933712 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:23Z","lastTransitionTime":"2026-02-26T11:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.033054 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.033238 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033413 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033452 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.033457 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033480 4699 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.033509 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033546 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:28.033526597 +0000 UTC m=+93.844353221 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033619 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033655 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033666 4699 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033725 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:28.033706222 +0000 UTC m=+93.844532646 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033739 4699 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033774 4699 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033801 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:28.033794424 +0000 UTC m=+93.844620858 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.033836 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:28.033807624 +0000 UTC m=+93.844634248 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.037174 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.037215 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.037228 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.037249 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.037265 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:24Z","lastTransitionTime":"2026-02-26T11:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.139949 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.140043 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.140061 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.140087 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.140103 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:24Z","lastTransitionTime":"2026-02-26T11:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.241880 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.241930 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.241944 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.241960 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.241971 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:24Z","lastTransitionTime":"2026-02-26T11:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.260481 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.260481 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.260577 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.260676 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.260751 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.261304 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.275775 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.276173 4699 scope.go:117] "RemoveContainer" containerID="eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0" Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.276364 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.344477 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.344536 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.344548 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.344566 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.344579 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:24Z","lastTransitionTime":"2026-02-26T11:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.453715 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.454452 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.454714 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.455372 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.455415 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:24Z","lastTransitionTime":"2026-02-26T11:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.558547 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.558609 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.558621 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.558641 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.558654 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:24Z","lastTransitionTime":"2026-02-26T11:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.641970 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70"} Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.642605 4699 scope.go:117] "RemoveContainer" containerID="eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0" Feb 26 11:12:24 crc kubenswrapper[4699]: E0226 11:12:24.642786 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.668152 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.668186 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.668193 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.668205 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.668214 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:24Z","lastTransitionTime":"2026-02-26T11:12:24Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.668104 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.687084 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.702999 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.721772 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.739581 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.755882 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.769401 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.771010 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.771042 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.771218 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.771236 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.771245 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:24Z","lastTransitionTime":"2026-02-26T11:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.874502 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.874542 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.874553 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.874569 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.874581 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:24Z","lastTransitionTime":"2026-02-26T11:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.977134 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.977162 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.977171 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.977184 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:24 crc kubenswrapper[4699]: I0226 11:12:24.977194 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:24Z","lastTransitionTime":"2026-02-26T11:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.079300 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.079358 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.079373 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.079390 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.079403 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:25Z","lastTransitionTime":"2026-02-26T11:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.182181 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.182243 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.182256 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.182272 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.182307 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:25Z","lastTransitionTime":"2026-02-26T11:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.284882 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.285236 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.285345 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.285436 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.285517 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:25Z","lastTransitionTime":"2026-02-26T11:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.388448 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.388485 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.388493 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.388506 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.388514 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:25Z","lastTransitionTime":"2026-02-26T11:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.490739 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.491069 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.491172 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.491245 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.491314 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:25Z","lastTransitionTime":"2026-02-26T11:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.594142 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.594727 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.594801 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.594870 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.594940 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:25Z","lastTransitionTime":"2026-02-26T11:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.702253 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.702310 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.702326 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.702346 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.702366 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:25Z","lastTransitionTime":"2026-02-26T11:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.804892 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.804959 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.804973 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.804990 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.805002 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:25Z","lastTransitionTime":"2026-02-26T11:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.908345 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.908733 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.908868 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.908966 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:25 crc kubenswrapper[4699]: I0226 11:12:25.909067 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:25Z","lastTransitionTime":"2026-02-26T11:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.011558 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.011640 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.011654 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.011672 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.011684 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:26Z","lastTransitionTime":"2026-02-26T11:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.114716 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.114754 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.114767 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.114784 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.114796 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:26Z","lastTransitionTime":"2026-02-26T11:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.217537 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.217827 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.217911 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.217996 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.218079 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:26Z","lastTransitionTime":"2026-02-26T11:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.260733 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.260806 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.260842 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:26 crc kubenswrapper[4699]: E0226 11:12:26.260997 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:26 crc kubenswrapper[4699]: E0226 11:12:26.261075 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:26 crc kubenswrapper[4699]: E0226 11:12:26.261219 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.281163 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.296493 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.314180 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249
0762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.320894 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.321240 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.321349 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.321437 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.321512 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:26Z","lastTransitionTime":"2026-02-26T11:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.331414 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.349954 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.364223 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.379537 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.425192 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.425243 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.425259 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.425276 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.425288 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:26Z","lastTransitionTime":"2026-02-26T11:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.528322 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.528371 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.528382 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.528405 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.528416 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:26Z","lastTransitionTime":"2026-02-26T11:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.631175 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.631223 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.631237 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.631251 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.631263 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:26Z","lastTransitionTime":"2026-02-26T11:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.736478 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.736533 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.736554 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.736575 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.736590 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:26Z","lastTransitionTime":"2026-02-26T11:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.839765 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.839798 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.839807 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.839823 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.839833 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:26Z","lastTransitionTime":"2026-02-26T11:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.943444 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.943538 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.943552 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.943568 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:26 crc kubenswrapper[4699]: I0226 11:12:26.943580 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:26Z","lastTransitionTime":"2026-02-26T11:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.046226 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.046273 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.046284 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.046300 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.046313 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:27Z","lastTransitionTime":"2026-02-26T11:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.149921 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.149972 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.149983 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.150002 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.150014 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:27Z","lastTransitionTime":"2026-02-26T11:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.197922 4699 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.253371 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.253422 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.253434 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.253452 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.253464 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:27Z","lastTransitionTime":"2026-02-26T11:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.356637 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.356675 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.356685 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.356700 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.356712 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:27Z","lastTransitionTime":"2026-02-26T11:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.458666 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.458703 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.458714 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.458731 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.458742 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:27Z","lastTransitionTime":"2026-02-26T11:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.561581 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.561630 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.561642 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.561658 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.561671 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:27Z","lastTransitionTime":"2026-02-26T11:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.663730 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.663784 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.663798 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.663822 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.663834 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:27Z","lastTransitionTime":"2026-02-26T11:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.766622 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.766677 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.766688 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.766706 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.766720 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:27Z","lastTransitionTime":"2026-02-26T11:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.870815 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.870870 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.870882 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.870899 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.870910 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:27Z","lastTransitionTime":"2026-02-26T11:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.969282 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:27 crc kubenswrapper[4699]: E0226 11:12:27.969518 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 11:12:35.969495351 +0000 UTC m=+101.780321785 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.973238 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.973355 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.973370 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.973395 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:27 crc kubenswrapper[4699]: I0226 11:12:27.973408 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:27Z","lastTransitionTime":"2026-02-26T11:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.070562 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.070654 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.070682 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.070712 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.070838 4699 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.070849 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.070888 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.070897 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:36.070878933 +0000 UTC m=+101.881705367 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.070902 4699 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.070973 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-26 11:12:36.070949435 +0000 UTC m=+101.881775939 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.071052 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.071103 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.071150 4699 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.071051 4699 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.071216 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:36.071195281 +0000 UTC m=+101.882021765 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.071241 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:36.071228862 +0000 UTC m=+101.882055406 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.077049 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.077105 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.077153 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.077168 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.077179 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:28Z","lastTransitionTime":"2026-02-26T11:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.179656 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.179724 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.179740 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.179765 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.179783 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:28Z","lastTransitionTime":"2026-02-26T11:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.260324 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.260399 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.260356 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.260491 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.260547 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 26 11:12:28 crc kubenswrapper[4699]: E0226 11:12:28.260599 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.281968 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.282003 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.282014 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.282030 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.282041 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:28Z","lastTransitionTime":"2026-02-26T11:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.384439 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.384740 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.384814 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.384875 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.384936 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:28Z","lastTransitionTime":"2026-02-26T11:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.488525 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.488592 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.488606 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.488628 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.488640 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:28Z","lastTransitionTime":"2026-02-26T11:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.591274 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.591530 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.591595 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.591672 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.591736 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:28Z","lastTransitionTime":"2026-02-26T11:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.696443 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.696492 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.696502 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.696521 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.696536 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:28Z","lastTransitionTime":"2026-02-26T11:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.799321 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.799377 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.799405 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.799425 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.799442 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:28Z","lastTransitionTime":"2026-02-26T11:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.904206 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.904287 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.904307 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.904518 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:28 crc kubenswrapper[4699]: I0226 11:12:28.904559 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:28Z","lastTransitionTime":"2026-02-26T11:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.008549 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.008597 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.008610 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.008630 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.008642 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:29Z","lastTransitionTime":"2026-02-26T11:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.112629 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.112692 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.112704 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.112723 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.112733 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:29Z","lastTransitionTime":"2026-02-26T11:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.216616 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.216684 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.216697 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.216710 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.216722 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:29Z","lastTransitionTime":"2026-02-26T11:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.319101 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.319187 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.319200 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.319218 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.319235 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:29Z","lastTransitionTime":"2026-02-26T11:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.421752 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.421813 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.421825 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.421844 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.421857 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:29Z","lastTransitionTime":"2026-02-26T11:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.525136 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.525185 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.525195 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.525211 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.525223 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:29Z","lastTransitionTime":"2026-02-26T11:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.627894 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.627937 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.627947 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.627963 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.627974 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:29Z","lastTransitionTime":"2026-02-26T11:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.730284 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.730316 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.730324 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.730338 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.730347 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:29Z","lastTransitionTime":"2026-02-26T11:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.832897 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.832940 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.832949 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.832966 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.832976 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:29Z","lastTransitionTime":"2026-02-26T11:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.935509 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.935551 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.935562 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.935578 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:29 crc kubenswrapper[4699]: I0226 11:12:29.935588 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:29Z","lastTransitionTime":"2026-02-26T11:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.038532 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.038581 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.038594 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.038610 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.038624 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:30Z","lastTransitionTime":"2026-02-26T11:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.141140 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.141176 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.141186 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.141201 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.141213 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:30Z","lastTransitionTime":"2026-02-26T11:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.244403 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.244444 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.244461 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.244480 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.244492 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:30Z","lastTransitionTime":"2026-02-26T11:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.261380 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.261445 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:12:30 crc kubenswrapper[4699]: E0226 11:12:30.261556 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 26 11:12:30 crc kubenswrapper[4699]: E0226 11:12:30.261673 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.261781 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:12:30 crc kubenswrapper[4699]: E0226 11:12:30.261857 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.346886 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.346926 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.346940 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.346957 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.346969 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:30Z","lastTransitionTime":"2026-02-26T11:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.449915 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.449958 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.449969 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.449983 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.449994 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:30Z","lastTransitionTime":"2026-02-26T11:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.552926 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.552964 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.552974 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.552988 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.552998 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:30Z","lastTransitionTime":"2026-02-26T11:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.655504 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.655533 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.655542 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.655555 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.655564 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:30Z","lastTransitionTime":"2026-02-26T11:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.757963 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.757999 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.758009 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.758022 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.758032 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:30Z","lastTransitionTime":"2026-02-26T11:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.860570 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.860610 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.860621 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.860636 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.860648 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:30Z","lastTransitionTime":"2026-02-26T11:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.963041 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.963081 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.963093 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.963110 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:30 crc kubenswrapper[4699]: I0226 11:12:30.963143 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:30Z","lastTransitionTime":"2026-02-26T11:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.065384 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.065425 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.065440 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.065459 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.065472 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:31Z","lastTransitionTime":"2026-02-26T11:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.167866 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.167910 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.167921 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.167938 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.167951 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:31Z","lastTransitionTime":"2026-02-26T11:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.270842 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.270878 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.270889 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.270904 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.270915 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:31Z","lastTransitionTime":"2026-02-26T11:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.373544 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.373582 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.373590 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.373610 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.373619 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:31Z","lastTransitionTime":"2026-02-26T11:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.476641 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.476710 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.476731 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.476754 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.476768 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:31Z","lastTransitionTime":"2026-02-26T11:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.579416 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.579446 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.579457 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.579474 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.579486 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:31Z","lastTransitionTime":"2026-02-26T11:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.681813 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.681858 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.681866 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.681880 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.681890 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:31Z","lastTransitionTime":"2026-02-26T11:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.784585 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.784645 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.784659 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.784676 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.784687 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:31Z","lastTransitionTime":"2026-02-26T11:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.886956 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.887010 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.887021 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.887037 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.887050 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:31Z","lastTransitionTime":"2026-02-26T11:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.989661 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.990515 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.990528 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.990542 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:31 crc kubenswrapper[4699]: I0226 11:12:31.990552 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:31Z","lastTransitionTime":"2026-02-26T11:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.092150 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.092193 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.092204 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.092221 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.092232 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:32Z","lastTransitionTime":"2026-02-26T11:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.194090 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.194163 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.194172 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.194188 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.194198 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:32Z","lastTransitionTime":"2026-02-26T11:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.260385 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.260422 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:32 crc kubenswrapper[4699]: E0226 11:12:32.260552 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.260570 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:32 crc kubenswrapper[4699]: E0226 11:12:32.260720 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:32 crc kubenswrapper[4699]: E0226 11:12:32.260773 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.299973 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.300029 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.300050 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.300070 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.300086 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:32Z","lastTransitionTime":"2026-02-26T11:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.402359 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.402404 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.402418 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.402433 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.402443 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:32Z","lastTransitionTime":"2026-02-26T11:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.504246 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.504286 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.504299 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.504319 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.504333 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:32Z","lastTransitionTime":"2026-02-26T11:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.606618 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.606944 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.607055 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.607205 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.607420 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:32Z","lastTransitionTime":"2026-02-26T11:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.710586 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.710640 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.710651 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.710671 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.710683 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:32Z","lastTransitionTime":"2026-02-26T11:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.813662 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.813733 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.813747 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.813767 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.813782 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:32Z","lastTransitionTime":"2026-02-26T11:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.916561 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.916614 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.916626 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.916647 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:32 crc kubenswrapper[4699]: I0226 11:12:32.916662 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:32Z","lastTransitionTime":"2026-02-26T11:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.018600 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.018641 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.018652 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.018670 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.018681 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.120708 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.120748 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.120764 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.120780 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.120791 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.222713 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.222745 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.222753 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.222764 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.222774 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.324689 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.324763 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.324776 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.324792 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.324804 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.426568 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.426616 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.426629 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.426645 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.426658 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.528972 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.529004 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.529013 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.529026 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.529037 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.543618 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.543661 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.543670 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.543686 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.543697 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: E0226 11:12:33.557350 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:33Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.561665 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.561716 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.561727 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.561747 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.561763 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: E0226 11:12:33.574840 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:33Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.578293 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.578327 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.578338 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.578355 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.578367 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: E0226 11:12:33.590460 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:33Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.594725 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.594769 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.594780 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.594797 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.594809 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: E0226 11:12:33.609604 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:33Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.613869 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.613931 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.613944 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.613963 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.613977 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: E0226 11:12:33.627868 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:33Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:33 crc kubenswrapper[4699]: E0226 11:12:33.628042 4699 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.631317 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.631348 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.631356 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.631368 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.631378 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.735105 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.735188 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.735200 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.735221 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.735232 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.837991 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.838075 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.838090 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.838107 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.838141 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.940476 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.940520 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.940532 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.940548 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:33 crc kubenswrapper[4699]: I0226 11:12:33.940560 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:33Z","lastTransitionTime":"2026-02-26T11:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.043106 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.043178 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.043191 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.043210 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.043223 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:34Z","lastTransitionTime":"2026-02-26T11:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.147025 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.147222 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.147235 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.147251 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.147263 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:34Z","lastTransitionTime":"2026-02-26T11:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.249704 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.249748 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.249758 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.249772 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.249783 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:34Z","lastTransitionTime":"2026-02-26T11:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.260349 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.260413 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.260380 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:34 crc kubenswrapper[4699]: E0226 11:12:34.260499 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:34 crc kubenswrapper[4699]: E0226 11:12:34.260565 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:34 crc kubenswrapper[4699]: E0226 11:12:34.260742 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.352723 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.352779 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.352789 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.352809 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.352821 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:34Z","lastTransitionTime":"2026-02-26T11:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.455848 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.455920 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.455937 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.455965 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.455982 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:34Z","lastTransitionTime":"2026-02-26T11:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.559368 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.559417 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.559436 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.559456 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.559467 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:34Z","lastTransitionTime":"2026-02-26T11:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.662966 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.663027 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.663039 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.663060 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.663074 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:34Z","lastTransitionTime":"2026-02-26T11:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.766719 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.766756 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.766766 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.766780 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.766792 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:34Z","lastTransitionTime":"2026-02-26T11:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.869756 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.869795 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.869803 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.869817 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.869826 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:34Z","lastTransitionTime":"2026-02-26T11:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.972370 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.972418 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.972428 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.972443 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:34 crc kubenswrapper[4699]: I0226 11:12:34.972453 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:34Z","lastTransitionTime":"2026-02-26T11:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.073878 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.073906 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.073915 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.073927 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.073937 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:35Z","lastTransitionTime":"2026-02-26T11:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.176291 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.176343 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.176361 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.176378 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.176390 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:35Z","lastTransitionTime":"2026-02-26T11:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.278761 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.278813 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.278821 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.278834 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.278844 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:35Z","lastTransitionTime":"2026-02-26T11:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.381378 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.381426 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.381438 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.381454 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.381467 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:35Z","lastTransitionTime":"2026-02-26T11:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.483404 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.483463 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.483480 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.483498 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.483514 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:35Z","lastTransitionTime":"2026-02-26T11:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.586191 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.586228 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.586236 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.586252 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.586264 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:35Z","lastTransitionTime":"2026-02-26T11:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.688727 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.688756 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.688766 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.688778 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.688808 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:35Z","lastTransitionTime":"2026-02-26T11:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.791252 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.791289 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.791305 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.791321 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.791334 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:35Z","lastTransitionTime":"2026-02-26T11:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.894398 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.894434 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.894448 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.894465 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.894478 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:35Z","lastTransitionTime":"2026-02-26T11:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.997570 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.997603 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.997612 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.997626 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:35 crc kubenswrapper[4699]: I0226 11:12:35.997635 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:35Z","lastTransitionTime":"2026-02-26T11:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.058352 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.058543 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 11:12:52.058524546 +0000 UTC m=+117.869350980 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.100608 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.100643 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.100653 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.100666 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.100675 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:36Z","lastTransitionTime":"2026-02-26T11:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.159839 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.159884 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.159905 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.159922 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160035 4699 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160096 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:52.160077902 +0000 UTC m=+117.970904336 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160306 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160334 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160348 4699 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160402 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-26 11:12:52.16039258 +0000 UTC m=+117.971219014 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160422 4699 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160635 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:52.160609606 +0000 UTC m=+117.971436130 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160482 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160671 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160687 4699 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.160724 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:52.160716179 +0000 UTC m=+117.971542743 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.203663 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.203709 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.203718 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.203732 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.203744 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:36Z","lastTransitionTime":"2026-02-26T11:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.260345 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.260408 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.260486 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.260365 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.260770 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:36 crc kubenswrapper[4699]: E0226 11:12:36.260887 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.274406 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:36Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.286965 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:36Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.302152 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:36Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.305538 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.305569 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.305577 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.305589 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.305598 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:36Z","lastTransitionTime":"2026-02-26T11:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.316412 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:36Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.330609 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:36Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.344046 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:36Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.356171 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:36Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.408180 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.408214 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.408224 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.408241 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.408254 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:36Z","lastTransitionTime":"2026-02-26T11:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.510269 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.510313 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.510325 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.510340 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.510352 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:36Z","lastTransitionTime":"2026-02-26T11:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.612196 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.612239 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.612250 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.612266 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.612280 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:36Z","lastTransitionTime":"2026-02-26T11:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.716345 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.716402 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.716413 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.716432 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.716444 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:36Z","lastTransitionTime":"2026-02-26T11:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.819882 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.819921 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.819930 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.819946 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.819960 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:36Z","lastTransitionTime":"2026-02-26T11:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.923267 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.923359 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.923384 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.923416 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:36 crc kubenswrapper[4699]: I0226 11:12:36.923438 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:36Z","lastTransitionTime":"2026-02-26T11:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.025618 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.025672 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.025684 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.025702 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.025715 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:37Z","lastTransitionTime":"2026-02-26T11:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.128569 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.128631 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.128644 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.128664 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.128680 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:37Z","lastTransitionTime":"2026-02-26T11:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.231414 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.231456 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.231467 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.231484 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.231497 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:37Z","lastTransitionTime":"2026-02-26T11:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.333996 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.334045 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.334056 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.334074 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.334085 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:37Z","lastTransitionTime":"2026-02-26T11:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.436315 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.436362 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.436370 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.436385 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.436395 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:37Z","lastTransitionTime":"2026-02-26T11:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.539024 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.539071 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.539083 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.539145 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.539158 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:37Z","lastTransitionTime":"2026-02-26T11:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.642633 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.642683 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.642696 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.642715 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.642731 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:37Z","lastTransitionTime":"2026-02-26T11:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.744594 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.744843 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.744907 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.744966 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.745021 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:37Z","lastTransitionTime":"2026-02-26T11:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.847580 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.847635 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.847652 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.847673 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.847686 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:37Z","lastTransitionTime":"2026-02-26T11:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.950396 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.950927 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.950994 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.951089 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:37 crc kubenswrapper[4699]: I0226 11:12:37.951195 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:37Z","lastTransitionTime":"2026-02-26T11:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.054431 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.054663 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.054753 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.054812 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.054904 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:38Z","lastTransitionTime":"2026-02-26T11:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.157778 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.158144 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.158239 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.158428 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.158524 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:38Z","lastTransitionTime":"2026-02-26T11:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.260083 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:38 crc kubenswrapper[4699]: E0226 11:12:38.260308 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.260342 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:38 crc kubenswrapper[4699]: E0226 11:12:38.260462 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.260105 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:38 crc kubenswrapper[4699]: E0226 11:12:38.260710 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.261602 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.262022 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.262189 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.262295 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.262386 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:38Z","lastTransitionTime":"2026-02-26T11:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.364927 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.364960 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.364969 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.364983 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.364992 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:38Z","lastTransitionTime":"2026-02-26T11:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.466953 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.466990 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.467001 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.467018 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.467030 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:38Z","lastTransitionTime":"2026-02-26T11:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.570230 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.570334 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.570348 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.570372 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.570394 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:38Z","lastTransitionTime":"2026-02-26T11:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.672536 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.672802 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.672863 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.672930 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.673037 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:38Z","lastTransitionTime":"2026-02-26T11:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.775384 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.775578 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.775595 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.775619 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.775632 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:38Z","lastTransitionTime":"2026-02-26T11:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.878470 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.878516 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.878528 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.878543 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.878555 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:38Z","lastTransitionTime":"2026-02-26T11:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.980474 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.980516 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.980527 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.980541 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:38 crc kubenswrapper[4699]: I0226 11:12:38.980552 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:38Z","lastTransitionTime":"2026-02-26T11:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.082719 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.082757 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.082769 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.082783 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.082794 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:39Z","lastTransitionTime":"2026-02-26T11:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.185585 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.185643 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.185654 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.185671 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.185683 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:39Z","lastTransitionTime":"2026-02-26T11:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.287597 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.287643 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.287652 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.287670 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.287681 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:39Z","lastTransitionTime":"2026-02-26T11:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.390062 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.390353 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.390438 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.390502 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.390615 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:39Z","lastTransitionTime":"2026-02-26T11:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.493453 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.493496 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.493507 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.493522 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.493535 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:39Z","lastTransitionTime":"2026-02-26T11:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.596277 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.596327 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.596337 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.596354 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.596365 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:39Z","lastTransitionTime":"2026-02-26T11:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.698330 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.698637 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.698720 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.698818 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.698895 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:39Z","lastTransitionTime":"2026-02-26T11:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.801290 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.801323 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.801335 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.801351 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.801363 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:39Z","lastTransitionTime":"2026-02-26T11:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.903361 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.903401 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.903412 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.903427 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:39 crc kubenswrapper[4699]: I0226 11:12:39.903440 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:39Z","lastTransitionTime":"2026-02-26T11:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.005697 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.005755 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.005769 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.005785 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.005797 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:40Z","lastTransitionTime":"2026-02-26T11:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.108911 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.109036 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.109047 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.109064 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.109091 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:40Z","lastTransitionTime":"2026-02-26T11:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.211595 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.211627 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.211639 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.211653 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.211665 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:40Z","lastTransitionTime":"2026-02-26T11:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.260344 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.260841 4699 scope.go:117] "RemoveContainer" containerID="eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0" Feb 26 11:12:40 crc kubenswrapper[4699]: E0226 11:12:40.261018 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.261193 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.261224 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:40 crc kubenswrapper[4699]: E0226 11:12:40.261272 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:40 crc kubenswrapper[4699]: E0226 11:12:40.261605 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:40 crc kubenswrapper[4699]: E0226 11:12:40.261657 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.313723 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.314087 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.314239 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.314328 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.314417 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:40Z","lastTransitionTime":"2026-02-26T11:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.416676 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.416709 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.416720 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.416814 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.416834 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:40Z","lastTransitionTime":"2026-02-26T11:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.519303 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.519343 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.519353 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.519369 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.519378 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:40Z","lastTransitionTime":"2026-02-26T11:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.621819 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.621872 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.621882 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.621897 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.621907 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:40Z","lastTransitionTime":"2026-02-26T11:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.724686 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.724727 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.724736 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.724751 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.724763 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:40Z","lastTransitionTime":"2026-02-26T11:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.826353 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.826388 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.826397 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.826410 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.826420 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:40Z","lastTransitionTime":"2026-02-26T11:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.851665 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-gbl2h"] Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.852222 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-gbl2h" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.854448 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.854512 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.854587 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.866898 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:40Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.878657 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:40Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.892033 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249
0762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:40Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.904521 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:40Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.919607 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:40Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.930334 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.930404 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.930420 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.930438 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.930472 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:40Z","lastTransitionTime":"2026-02-26T11:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.934850 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:40Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.945891 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:40Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:40 crc kubenswrapper[4699]: I0226 11:12:40.961372 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:40Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.009858 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/db105b7b-9325-4f20-a760-06c045ea844f-hosts-file\") pod \"node-resolver-gbl2h\" (UID: \"db105b7b-9325-4f20-a760-06c045ea844f\") " pod="openshift-dns/node-resolver-gbl2h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.009935 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9n6f\" (UniqueName: \"kubernetes.io/projected/db105b7b-9325-4f20-a760-06c045ea844f-kube-api-access-t9n6f\") pod \"node-resolver-gbl2h\" (UID: \"db105b7b-9325-4f20-a760-06c045ea844f\") " pod="openshift-dns/node-resolver-gbl2h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.033247 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.033287 4699 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.033295 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.033309 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.033319 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:41Z","lastTransitionTime":"2026-02-26T11:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.110683 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9n6f\" (UniqueName: \"kubernetes.io/projected/db105b7b-9325-4f20-a760-06c045ea844f-kube-api-access-t9n6f\") pod \"node-resolver-gbl2h\" (UID: \"db105b7b-9325-4f20-a760-06c045ea844f\") " pod="openshift-dns/node-resolver-gbl2h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.110764 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/db105b7b-9325-4f20-a760-06c045ea844f-hosts-file\") pod \"node-resolver-gbl2h\" (UID: \"db105b7b-9325-4f20-a760-06c045ea844f\") " pod="openshift-dns/node-resolver-gbl2h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.110860 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/db105b7b-9325-4f20-a760-06c045ea844f-hosts-file\") pod \"node-resolver-gbl2h\" (UID: 
\"db105b7b-9325-4f20-a760-06c045ea844f\") " pod="openshift-dns/node-resolver-gbl2h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.135951 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.135995 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.136006 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.136024 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.136036 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:41Z","lastTransitionTime":"2026-02-26T11:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.138684 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9n6f\" (UniqueName: \"kubernetes.io/projected/db105b7b-9325-4f20-a760-06c045ea844f-kube-api-access-t9n6f\") pod \"node-resolver-gbl2h\" (UID: \"db105b7b-9325-4f20-a760-06c045ea844f\") " pod="openshift-dns/node-resolver-gbl2h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.165468 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-gbl2h" Feb 26 11:12:41 crc kubenswrapper[4699]: W0226 11:12:41.179026 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb105b7b_9325_4f20_a760_06c045ea844f.slice/crio-f73bc9858ad6a2aa98dfd93ca9f6bf62bd314c1c2fb6f45ef41eced5e8a668ec WatchSource:0}: Error finding container f73bc9858ad6a2aa98dfd93ca9f6bf62bd314c1c2fb6f45ef41eced5e8a668ec: Status 404 returned error can't find the container with id f73bc9858ad6a2aa98dfd93ca9f6bf62bd314c1c2fb6f45ef41eced5e8a668ec Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.225598 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2k6b7"] Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.225973 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-28p79"] Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.226220 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-tfp9h"] Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.226873 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.227529 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.228001 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.231875 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.235368 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.235694 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.235951 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.236298 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.238588 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.238671 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.238607 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.238844 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.239014 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.239206 
4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.239384 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.243723 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.244295 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.244310 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.244330 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.244346 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:41Z","lastTransitionTime":"2026-02-26T11:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.253414 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.271685 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.288172 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.305361 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313042 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25g9f\" (UniqueName: \"kubernetes.io/projected/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-kube-api-access-25g9f\") pod \"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313097 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-hostroot\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313135 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-mcd-auth-proxy-config\") pod \"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313174 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-cnibin\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313195 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-var-lib-cni-multus\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313220 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-system-cni-dir\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313240 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-run-k8s-cni-cncf-io\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313262 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-etc-kubernetes\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313282 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g8bl\" (UniqueName: \"kubernetes.io/projected/32ce77d1-5287-4674-aeda-810070efbb29-kube-api-access-6g8bl\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313297 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-var-lib-cni-bin\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313323 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313342 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/95d160b5-697e-42fa-8cd0-8b7b337820c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313359 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tczc\" (UniqueName: \"kubernetes.io/projected/95d160b5-697e-42fa-8cd0-8b7b337820c4-kube-api-access-5tczc\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313376 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/32ce77d1-5287-4674-aeda-810070efbb29-cni-binary-copy\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313391 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/32ce77d1-5287-4674-aeda-810070efbb29-multus-daemon-config\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313408 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-os-release\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313424 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-multus-socket-dir-parent\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313445 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-var-lib-kubelet\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313461 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-proxy-tls\") pod \"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc 
kubenswrapper[4699]: I0226 11:12:41.313481 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-multus-conf-dir\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313498 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-cnibin\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313512 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-os-release\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313544 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-run-netns\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313596 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95d160b5-697e-42fa-8cd0-8b7b337820c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc 
kubenswrapper[4699]: I0226 11:12:41.313620 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-run-multus-certs\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313654 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-multus-cni-dir\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313670 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-system-cni-dir\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.313693 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-rootfs\") pod \"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.319704 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.340306 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.347860 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.347924 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.347937 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.347954 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:41 crc kubenswrapper[4699]: 
I0226 11:12:41.347964 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:41Z","lastTransitionTime":"2026-02-26T11:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.360669 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.377760 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.393093 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.412254 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414536 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-cnibin\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414586 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-os-release\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414608 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-run-netns\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414624 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-run-multus-certs\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414652 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95d160b5-697e-42fa-8cd0-8b7b337820c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414666 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-multus-cni-dir\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414689 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-system-cni-dir\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414710 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-rootfs\") pod \"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414729 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g9f\" (UniqueName: 
\"kubernetes.io/projected/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-kube-api-access-25g9f\") pod \"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414745 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-hostroot\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414760 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-cnibin\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414776 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-var-lib-cni-multus\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414792 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-mcd-auth-proxy-config\") pod \"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414807 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-system-cni-dir\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414825 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-run-k8s-cni-cncf-io\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414840 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-etc-kubernetes\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414853 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-var-lib-cni-bin\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414869 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g8bl\" (UniqueName: \"kubernetes.io/projected/32ce77d1-5287-4674-aeda-810070efbb29-kube-api-access-6g8bl\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414883 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/95d160b5-697e-42fa-8cd0-8b7b337820c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414899 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tczc\" (UniqueName: \"kubernetes.io/projected/95d160b5-697e-42fa-8cd0-8b7b337820c4-kube-api-access-5tczc\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414914 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/32ce77d1-5287-4674-aeda-810070efbb29-cni-binary-copy\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414935 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/32ce77d1-5287-4674-aeda-810070efbb29-multus-daemon-config\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414964 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414979 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-os-release\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.414996 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-multus-socket-dir-parent\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415010 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-var-lib-kubelet\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415024 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-proxy-tls\") pod \"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415038 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-multus-conf-dir\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415099 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-multus-conf-dir\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415164 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-run-k8s-cni-cncf-io\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415190 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-etc-kubernetes\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415213 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-var-lib-cni-bin\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415208 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-system-cni-dir\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415269 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-cnibin\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: 
\"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415392 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-os-release\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415414 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-rootfs\") pod \"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415663 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-system-cni-dir\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415660 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-multus-cni-dir\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415664 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-run-netns\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415730 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-cnibin\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415768 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-hostroot\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415792 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-run-multus-certs\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.415795 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-var-lib-cni-multus\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.416143 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/95d160b5-697e-42fa-8cd0-8b7b337820c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.416291 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/95d160b5-697e-42fa-8cd0-8b7b337820c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.416309 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-os-release\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.416396 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-host-var-lib-kubelet\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.416454 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32ce77d1-5287-4674-aeda-810070efbb29-multus-socket-dir-parent\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.416702 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/32ce77d1-5287-4674-aeda-810070efbb29-cni-binary-copy\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.416718 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.416818 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/32ce77d1-5287-4674-aeda-810070efbb29-multus-daemon-config\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.421883 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95d160b5-697e-42fa-8cd0-8b7b337820c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.422649 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-proxy-tls\") pod \"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.433040 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.435057 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25g9f\" (UniqueName: \"kubernetes.io/projected/e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff-kube-api-access-25g9f\") pod \"machine-config-daemon-28p79\" (UID: \"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\") " pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.437351 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tczc\" (UniqueName: \"kubernetes.io/projected/95d160b5-697e-42fa-8cd0-8b7b337820c4-kube-api-access-5tczc\") pod 
\"multus-additional-cni-plugins-tfp9h\" (UID: \"95d160b5-697e-42fa-8cd0-8b7b337820c4\") " pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.439059 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g8bl\" (UniqueName: \"kubernetes.io/projected/32ce77d1-5287-4674-aeda-810070efbb29-kube-api-access-6g8bl\") pod \"multus-2k6b7\" (UID: \"32ce77d1-5287-4674-aeda-810070efbb29\") " pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.448936 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.451154 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.451189 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.451198 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.451211 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.451221 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:41Z","lastTransitionTime":"2026-02-26T11:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.464634 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.479632 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.498234 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.515529 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.531031 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.547183 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.553790 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.554064 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.554181 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.554262 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.554333 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:41Z","lastTransitionTime":"2026-02-26T11:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.562344 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.563544 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.572826 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-2k6b7" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.582146 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.583685 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.598653 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.604394 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cw6vx"] Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.605840 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.609778 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.610070 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.610233 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.610434 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.610545 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.610682 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.611140 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.624055 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.642920 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.660436 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.660492 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.660506 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 
11:12:41.660522 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.660536 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:41Z","lastTransitionTime":"2026-02-26T11:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.664720 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.680912 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.695550 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" event={"ID":"95d160b5-697e-42fa-8cd0-8b7b337820c4","Type":"ContainerStarted","Data":"41434ad1774419837d0cfa44cc794bc1cf04b6d532f022c99cc9412efa657e08"} Feb 
26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.706407 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gbl2h" event={"ID":"db105b7b-9325-4f20-a760-06c045ea844f","Type":"ContainerStarted","Data":"ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.706454 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gbl2h" event={"ID":"db105b7b-9325-4f20-a760-06c045ea844f","Type":"ContainerStarted","Data":"f73bc9858ad6a2aa98dfd93ca9f6bf62bd314c1c2fb6f45ef41eced5e8a668ec"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.706499 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.710151 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"75bb32524199dbcede7aa4c16881b59120ad0fef6384c38ee908897771299028"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.713431 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2k6b7" event={"ID":"32ce77d1-5287-4674-aeda-810070efbb29","Type":"ContainerStarted","Data":"e2939f6d7ac86a30ce90043410998ebc04c1e55df27aa2c7369ea114b2b85f39"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.720994 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-bin\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721054 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-slash\") pod \"ovnkube-node-cw6vx\" (UID: 
\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721075 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-log-socket\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721097 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovn-node-metrics-cert\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721156 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-kubelet\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721179 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-systemd-units\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721201 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-ovn\") pod \"ovnkube-node-cw6vx\" (UID: 
\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721221 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnmg2\" (UniqueName: \"kubernetes.io/projected/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-kube-api-access-tnmg2\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721244 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-node-log\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721258 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-netns\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721279 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-openvswitch\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721355 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-ovn-kubernetes\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721414 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721452 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-config\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721478 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-var-lib-openvswitch\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721500 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-etc-openvswitch\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721522 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-netd\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721542 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-env-overrides\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721560 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-script-lib\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.721584 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-systemd\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.723642 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.739067 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.755692 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.762749 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.762797 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.762810 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.762827 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.762839 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:41Z","lastTransitionTime":"2026-02-26T11:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.772518 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.791344 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.810988 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822480 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-env-overrides\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822516 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-script-lib\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822545 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-systemd\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822576 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-bin\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822593 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-slash\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822608 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-log-socket\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822624 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovn-node-metrics-cert\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822652 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-kubelet\") pod 
\"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822671 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnmg2\" (UniqueName: \"kubernetes.io/projected/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-kube-api-access-tnmg2\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822689 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-systemd-units\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822705 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-ovn\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822732 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-node-log\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822748 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-openvswitch\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822762 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-netns\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822776 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-ovn-kubernetes\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822789 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-config\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822807 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822823 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-var-lib-openvswitch\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822836 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-etc-openvswitch\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822863 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-netd\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.822937 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-netd\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.823601 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-env-overrides\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.824743 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-script-lib\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 
11:12:41.824807 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-systemd\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.825295 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-ovn\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.825378 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-bin\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.825408 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-slash\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.825435 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-log-socket\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.825702 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-node-log\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.825743 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-openvswitch\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.825780 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-netns\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.825845 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-ovn-kubernetes\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.826100 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-systemd-units\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.826193 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-var-lib-openvswitch\") pod 
\"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.826268 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.826318 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-kubelet\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.826389 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-config\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.826259 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-etc-openvswitch\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.826306 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.829593 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovn-node-metrics-cert\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.842506 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.848259 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnmg2\" (UniqueName: \"kubernetes.io/projected/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-kube-api-access-tnmg2\") pod \"ovnkube-node-cw6vx\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.869627 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.883183 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.883235 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.883249 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.883268 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.883280 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:41Z","lastTransitionTime":"2026-02-26T11:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.905317 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.922581 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.925955 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:41 crc kubenswrapper[4699]: W0226 11:12:41.942288 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd12b2df_7af6_45bc_88e7_d5e5e6451e65.slice/crio-90b46f5a3e61ec03394a2be7ff4739209b903f31912a7a66807fca0693899985 WatchSource:0}: Error finding container 90b46f5a3e61ec03394a2be7ff4739209b903f31912a7a66807fca0693899985: Status 404 returned error can't find the container with id 90b46f5a3e61ec03394a2be7ff4739209b903f31912a7a66807fca0693899985 Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.948696 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.966369 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.986202 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.986255 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.986276 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.986296 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.986309 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:41Z","lastTransitionTime":"2026-02-26T11:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:41 crc kubenswrapper[4699]: I0226 11:12:41.988603 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.006494 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703
f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.040670 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.064309 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.081829 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.093272 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.093322 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.093338 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.093360 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.093379 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:42Z","lastTransitionTime":"2026-02-26T11:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.107666 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.197091 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.197146 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.197159 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.197175 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.197188 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:42Z","lastTransitionTime":"2026-02-26T11:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.260174 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.260257 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:42 crc kubenswrapper[4699]: E0226 11:12:42.260343 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:42 crc kubenswrapper[4699]: E0226 11:12:42.260435 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.260514 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:42 crc kubenswrapper[4699]: E0226 11:12:42.260666 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.302769 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.302828 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.302838 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.302858 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.302873 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:42Z","lastTransitionTime":"2026-02-26T11:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.406141 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.406185 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.406195 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.406213 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.406227 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:42Z","lastTransitionTime":"2026-02-26T11:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.509013 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.509062 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.509073 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.509092 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.509103 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:42Z","lastTransitionTime":"2026-02-26T11:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.612343 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.612415 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.612428 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.612452 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.612465 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:42Z","lastTransitionTime":"2026-02-26T11:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.715607 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.715638 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.715647 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.715659 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.715668 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:42Z","lastTransitionTime":"2026-02-26T11:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.721558 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.721603 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.723814 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2k6b7" event={"ID":"32ce77d1-5287-4674-aeda-810070efbb29","Type":"ContainerStarted","Data":"b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.725583 4699 generic.go:334] "Generic (PLEG): container finished" podID="95d160b5-697e-42fa-8cd0-8b7b337820c4" containerID="dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01" exitCode=0 Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.725643 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" event={"ID":"95d160b5-697e-42fa-8cd0-8b7b337820c4","Type":"ContainerDied","Data":"dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.729165 4699 generic.go:334] "Generic (PLEG): container finished" podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerID="f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124" exitCode=0 Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.729253 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.729475 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"90b46f5a3e61ec03394a2be7ff4739209b903f31912a7a66807fca0693899985"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.737625 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.757788 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.776220 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.792443 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9
d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T1
1:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.817945 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.817981 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.817993 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.818009 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.818021 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:42Z","lastTransitionTime":"2026-02-26T11:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.818743 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.834547 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.851104 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.868658 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.881563 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.898135 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.913497 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.921354 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.921409 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.921425 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.921442 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.921455 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:42Z","lastTransitionTime":"2026-02-26T11:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.931144 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.951084 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.963545 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a209
43a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.979697 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:42 crc kubenswrapper[4699]: I0226 11:12:42.999730 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:42Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.018208 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.024142 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.024193 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.024205 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.024226 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.024239 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.033374 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.061126 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.089943 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249
0762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.108373 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.124729 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.129501 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.129608 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.129687 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.129710 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.129780 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.150181 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.165529 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.241033 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.241082 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.241096 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.241134 4699 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.241150 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.348806 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.348853 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.348862 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.348879 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.348911 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.451579 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.451622 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.451634 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.451651 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.451665 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.555007 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.555044 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.555056 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.555073 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.555085 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.657592 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.657634 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.657644 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.657661 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.657673 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.739529 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.739770 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.739846 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.739905 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.739998 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.742126 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" event={"ID":"95d160b5-697e-42fa-8cd0-8b7b337820c4","Type":"ContainerStarted","Data":"83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61"} Feb 26 11:12:43 crc 
kubenswrapper[4699]: I0226 11:12:43.760889 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.760867 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.760928 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.761088 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.761103 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.761131 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.785736 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.804995 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.821334 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.837080 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.850928 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.865110 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.865445 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.865541 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.865645 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.865727 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.870058 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.870355 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.870444 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.870560 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.870649 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.874981 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: E0226 11:12:43.888557 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.893810 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249
0762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.894372 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.894433 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.894473 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.894489 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.894502 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: E0226 11:12:43.910594 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.911748 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.916920 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.917008 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.917022 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.917039 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.917094 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.940393 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: E0226 11:12:43.943780 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.950387 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.950481 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.950494 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.950514 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.950527 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.961780 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: E0226 11:12:43.963908 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.975562 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.975631 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.975645 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.975665 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.975679 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.990973 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: E0226 11:12:43.991564 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:43Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:43 crc kubenswrapper[4699]: E0226 11:12:43.991711 4699 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.993878 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.993930 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.993944 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.993963 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:43 crc kubenswrapper[4699]: I0226 11:12:43.993979 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:43Z","lastTransitionTime":"2026-02-26T11:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.096444 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.096509 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.096527 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.096554 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.096571 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:44Z","lastTransitionTime":"2026-02-26T11:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.198826 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.198868 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.198878 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.198895 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.198907 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:44Z","lastTransitionTime":"2026-02-26T11:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.260484 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.260634 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:44 crc kubenswrapper[4699]: E0226 11:12:44.260753 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.260815 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:44 crc kubenswrapper[4699]: E0226 11:12:44.261002 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:44 crc kubenswrapper[4699]: E0226 11:12:44.261187 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.301813 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.301854 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.301863 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.301877 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.301888 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:44Z","lastTransitionTime":"2026-02-26T11:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.403821 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.403863 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.403878 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.403895 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.403906 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:44Z","lastTransitionTime":"2026-02-26T11:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.506575 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.506615 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.506625 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.506641 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.506653 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:44Z","lastTransitionTime":"2026-02-26T11:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.608715 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.608763 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.608775 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.608789 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.608799 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:44Z","lastTransitionTime":"2026-02-26T11:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.711477 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.711517 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.711526 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.711541 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.711553 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:44Z","lastTransitionTime":"2026-02-26T11:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.751051 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f"} Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.813787 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.813826 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.813836 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.813851 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.813863 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:44Z","lastTransitionTime":"2026-02-26T11:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.922066 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.922149 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.922164 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.922180 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:44 crc kubenswrapper[4699]: I0226 11:12:44.922520 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:44Z","lastTransitionTime":"2026-02-26T11:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.026645 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.026877 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.026890 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.026907 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.026919 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:45Z","lastTransitionTime":"2026-02-26T11:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.135326 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.135396 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.135408 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.135425 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.135458 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:45Z","lastTransitionTime":"2026-02-26T11:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.238174 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.238293 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.238303 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.238316 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.238328 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:45Z","lastTransitionTime":"2026-02-26T11:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.276672 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.341752 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.341797 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.341810 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.341855 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.341870 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:45Z","lastTransitionTime":"2026-02-26T11:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.444294 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.444337 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.444349 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.444364 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.444378 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:45Z","lastTransitionTime":"2026-02-26T11:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.547358 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.547403 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.547414 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.547431 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.547447 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:45Z","lastTransitionTime":"2026-02-26T11:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.650494 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.650538 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.650547 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.650563 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.650573 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:45Z","lastTransitionTime":"2026-02-26T11:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.753214 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.753265 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.753275 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.753293 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.753314 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:45Z","lastTransitionTime":"2026-02-26T11:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.756617 4699 generic.go:334] "Generic (PLEG): container finished" podID="95d160b5-697e-42fa-8cd0-8b7b337820c4" containerID="83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61" exitCode=0 Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.756760 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" event={"ID":"95d160b5-697e-42fa-8cd0-8b7b337820c4","Type":"ContainerDied","Data":"83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.777551 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.800202 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdea
f3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.835190 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.850518 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.857042 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.857079 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.857089 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:45 crc 
kubenswrapper[4699]: I0226 11:12:45.857106 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.857135 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:45Z","lastTransitionTime":"2026-02-26T11:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.866352 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.885460 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.901461 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.918928 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.937169 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.953796 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.959476 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.959516 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.959527 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.959545 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.959580 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:45Z","lastTransitionTime":"2026-02-26T11:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.978216 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:45 crc kubenswrapper[4699]: I0226 11:12:45.999691 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.022896 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.064659 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.064722 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.064739 4699 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.064760 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.064777 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:46Z","lastTransitionTime":"2026-02-26T11:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.167739 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.168353 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.168384 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.168403 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.168416 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:46Z","lastTransitionTime":"2026-02-26T11:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.260621 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:46 crc kubenswrapper[4699]: E0226 11:12:46.261371 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.260748 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:46 crc kubenswrapper[4699]: E0226 11:12:46.261542 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.260708 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:46 crc kubenswrapper[4699]: E0226 11:12:46.261626 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.270641 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.270690 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.270703 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.270727 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.270742 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:46Z","lastTransitionTime":"2026-02-26T11:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.277235 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z 
is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.292924 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.308787 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.327101 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.343100 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.358588 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.373835 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.373897 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.373913 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.373935 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.373947 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:46Z","lastTransitionTime":"2026-02-26T11:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.381061 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.405228 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.422554 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.440658 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.464621 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.476608 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.476672 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.476687 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.476713 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.476731 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:46Z","lastTransitionTime":"2026-02-26T11:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.480727 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.501897 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.580293 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.580351 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.580364 4699 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.580381 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.580393 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:46Z","lastTransitionTime":"2026-02-26T11:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.683534 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.683571 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.683582 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.683600 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.683611 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:46Z","lastTransitionTime":"2026-02-26T11:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.767167 4699 generic.go:334] "Generic (PLEG): container finished" podID="95d160b5-697e-42fa-8cd0-8b7b337820c4" containerID="677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b" exitCode=0 Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.767282 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" event={"ID":"95d160b5-697e-42fa-8cd0-8b7b337820c4","Type":"ContainerDied","Data":"677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b"} Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.774158 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f"} Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.783611 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.788258 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:46 crc 
kubenswrapper[4699]: I0226 11:12:46.788314 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.788326 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.788343 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.788356 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:46Z","lastTransitionTime":"2026-02-26T11:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.816172 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6
df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.835536 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.854166 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.872960 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.890108 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.893560 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.893616 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.893626 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.893661 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.893673 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:46Z","lastTransitionTime":"2026-02-26T11:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.918114 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.938015 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.960532 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.981088 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.997007 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.997055 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.997065 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.997081 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:46 crc kubenswrapper[4699]: I0226 11:12:46.997091 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:46Z","lastTransitionTime":"2026-02-26T11:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.004984 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.022905 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.041882 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 
11:12:47.100911 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.100952 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.100962 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.100977 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.100988 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:47Z","lastTransitionTime":"2026-02-26T11:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.205876 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.205936 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.205945 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.205962 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.205972 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:47Z","lastTransitionTime":"2026-02-26T11:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.308636 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.308668 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.308677 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.308719 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.308729 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:47Z","lastTransitionTime":"2026-02-26T11:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.411896 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.411931 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.411942 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.411958 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.411970 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:47Z","lastTransitionTime":"2026-02-26T11:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.514527 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.514559 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.514568 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.514581 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.514594 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:47Z","lastTransitionTime":"2026-02-26T11:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.617005 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.617052 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.617063 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.617081 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.617095 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:47Z","lastTransitionTime":"2026-02-26T11:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.719913 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.719956 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.719967 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.719983 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.719994 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:47Z","lastTransitionTime":"2026-02-26T11:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.751086 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gs59q"] Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.751553 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gs59q" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.754529 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.754841 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.754885 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.755583 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.770193 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c
607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.779937 4699 generic.go:334] "Generic (PLEG): container finished" podID="95d160b5-697e-42fa-8cd0-8b7b337820c4" containerID="a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9" exitCode=0 Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.779988 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" event={"ID":"95d160b5-697e-42fa-8cd0-8b7b337820c4","Type":"ContainerDied","Data":"a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9"} Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.788255 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6
df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.803969 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.818886 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.823929 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.823984 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.823994 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 
11:12:47.824011 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.824022 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:47Z","lastTransitionTime":"2026-02-26T11:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.835401 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.850181 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.871306 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.884333 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.897157 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/adbc7948-b89f-46f1-8ebd-c5406fee4e30-host\") pod \"node-ca-gs59q\" (UID: \"adbc7948-b89f-46f1-8ebd-c5406fee4e30\") " pod="openshift-image-registry/node-ca-gs59q" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.897213 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-k8kzk\" (UniqueName: \"kubernetes.io/projected/adbc7948-b89f-46f1-8ebd-c5406fee4e30-kube-api-access-k8kzk\") pod \"node-ca-gs59q\" (UID: \"adbc7948-b89f-46f1-8ebd-c5406fee4e30\") " pod="openshift-image-registry/node-ca-gs59q" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.897242 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/adbc7948-b89f-46f1-8ebd-c5406fee4e30-serviceca\") pod \"node-ca-gs59q\" (UID: \"adbc7948-b89f-46f1-8ebd-c5406fee4e30\") " pod="openshift-image-registry/node-ca-gs59q" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.901210 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.918426 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.929913 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.929974 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.929987 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.930007 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.930021 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:47Z","lastTransitionTime":"2026-02-26T11:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.934764 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.981415 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:47Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.998650 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/adbc7948-b89f-46f1-8ebd-c5406fee4e30-host\") pod \"node-ca-gs59q\" (UID: \"adbc7948-b89f-46f1-8ebd-c5406fee4e30\") " pod="openshift-image-registry/node-ca-gs59q" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.998708 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8kzk\" (UniqueName: \"kubernetes.io/projected/adbc7948-b89f-46f1-8ebd-c5406fee4e30-kube-api-access-k8kzk\") pod \"node-ca-gs59q\" (UID: \"adbc7948-b89f-46f1-8ebd-c5406fee4e30\") " pod="openshift-image-registry/node-ca-gs59q" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.998735 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/adbc7948-b89f-46f1-8ebd-c5406fee4e30-serviceca\") pod \"node-ca-gs59q\" (UID: \"adbc7948-b89f-46f1-8ebd-c5406fee4e30\") " pod="openshift-image-registry/node-ca-gs59q" Feb 26 11:12:47 crc kubenswrapper[4699]: I0226 11:12:47.998759 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/adbc7948-b89f-46f1-8ebd-c5406fee4e30-host\") pod \"node-ca-gs59q\" (UID: 
\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\") " pod="openshift-image-registry/node-ca-gs59q" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.000006 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/adbc7948-b89f-46f1-8ebd-c5406fee4e30-serviceca\") pod \"node-ca-gs59q\" (UID: \"adbc7948-b89f-46f1-8ebd-c5406fee4e30\") " pod="openshift-image-registry/node-ca-gs59q" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.011924 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name
\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.028610 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8kzk\" (UniqueName: \"kubernetes.io/projected/adbc7948-b89f-46f1-8ebd-c5406fee4e30-kube-api-access-k8kzk\") pod \"node-ca-gs59q\" (UID: \"adbc7948-b89f-46f1-8ebd-c5406fee4e30\") " pod="openshift-image-registry/node-ca-gs59q" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.032907 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.032945 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.032954 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.032987 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.032997 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:48Z","lastTransitionTime":"2026-02-26T11:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.041704 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 
11:12:48.061304 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 
11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.069352 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gs59q" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.076503 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: W0226 11:12:48.081221 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadbc7948_b89f_46f1_8ebd_c5406fee4e30.slice/crio-e19702bf616d777e4c3b196bddff9586305430656663d47d5076a6ca0becb46d WatchSource:0}: Error finding container e19702bf616d777e4c3b196bddff9586305430656663d47d5076a6ca0becb46d: Status 404 returned error can't find the container with id e19702bf616d777e4c3b196bddff9586305430656663d47d5076a6ca0becb46d Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.093713 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.115161 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.130754 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.134758 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.134804 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.134813 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.134830 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.134841 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:48Z","lastTransitionTime":"2026-02-26T11:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.146649 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.165517 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.181304 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.199076 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.221740 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.238058 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.238470 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.238486 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 
11:12:48.238502 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.238512 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:48Z","lastTransitionTime":"2026-02-26T11:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.238408 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.254023 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.260547 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.260659 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:48 crc kubenswrapper[4699]: E0226 11:12:48.260820 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.260880 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:48 crc kubenswrapper[4699]: E0226 11:12:48.261033 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:48 crc kubenswrapper[4699]: E0226 11:12:48.260937 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.273496 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.285069 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.341402 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.341883 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.341964 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 
11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.342040 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.342120 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:48Z","lastTransitionTime":"2026-02-26T11:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.444689 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.444766 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.444779 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.444794 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.444909 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:48Z","lastTransitionTime":"2026-02-26T11:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.548151 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.548212 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.548224 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.548245 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.548259 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:48Z","lastTransitionTime":"2026-02-26T11:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.650907 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.650937 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.650948 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.650963 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.650975 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:48Z","lastTransitionTime":"2026-02-26T11:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.754091 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.754163 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.754173 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.754187 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.754197 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:48Z","lastTransitionTime":"2026-02-26T11:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.784367 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gs59q" event={"ID":"adbc7948-b89f-46f1-8ebd-c5406fee4e30","Type":"ContainerStarted","Data":"e19702bf616d777e4c3b196bddff9586305430656663d47d5076a6ca0becb46d"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.788290 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" event={"ID":"95d160b5-697e-42fa-8cd0-8b7b337820c4","Type":"ContainerStarted","Data":"fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.805785 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.821107 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.841257 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.857550 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.857614 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.857626 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.857653 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.857668 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:48Z","lastTransitionTime":"2026-02-26T11:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.864687 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.881777 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.898726 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tc
zc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.917974 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.933697 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.957030 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.961785 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.961834 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.961851 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.961871 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.961885 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:48Z","lastTransitionTime":"2026-02-26T11:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.973631 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:48 crc kubenswrapper[4699]: I0226 11:12:48.990932 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:48Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.013903 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.030690 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.044280 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.064522 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.064570 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.064588 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.064610 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.064622 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:49Z","lastTransitionTime":"2026-02-26T11:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.167023 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.167067 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.167078 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.167096 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.167109 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:49Z","lastTransitionTime":"2026-02-26T11:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.269547 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.269578 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.269586 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.269601 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.269800 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:49Z","lastTransitionTime":"2026-02-26T11:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.371890 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.371927 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.371944 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.371965 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.371977 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:49Z","lastTransitionTime":"2026-02-26T11:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.474070 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.474151 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.474161 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.474180 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.474192 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:49Z","lastTransitionTime":"2026-02-26T11:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.576800 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.576849 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.576860 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.576874 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.576884 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:49Z","lastTransitionTime":"2026-02-26T11:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.679990 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.680038 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.680047 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.680065 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.680079 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:49Z","lastTransitionTime":"2026-02-26T11:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.783315 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.783369 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.783378 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.783392 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.783403 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:49Z","lastTransitionTime":"2026-02-26T11:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.792560 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gs59q" event={"ID":"adbc7948-b89f-46f1-8ebd-c5406fee4e30","Type":"ContainerStarted","Data":"6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.797437 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.798028 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.798084 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.798104 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.818227 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.832594 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.841727 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.848467 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.848993 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tc
zc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.863949 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.876457 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.885417 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.885445 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.885453 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.885467 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.885476 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:49Z","lastTransitionTime":"2026-02-26T11:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.890658 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.906489 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.920480 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.935446 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.959975 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.972671 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.987714 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.987766 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.987776 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.987791 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.987801 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:49Z","lastTransitionTime":"2026-02-26T11:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:49 crc kubenswrapper[4699]: I0226 11:12:49.990557 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:49Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.007813 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.025068 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.042749 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249
0762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.060570 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.087858 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.090622 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.090664 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.090674 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.090690 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.090700 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:50Z","lastTransitionTime":"2026-02-26T11:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.113516 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.127476 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.147047 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tc
zc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.163797 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.178297 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.193357 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.193402 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.193414 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.193441 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.193455 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:50Z","lastTransitionTime":"2026-02-26T11:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.195141 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.216262 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.232468 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.245541 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.260136 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.260416 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:50 crc kubenswrapper[4699]: E0226 11:12:50.260548 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.260892 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:50 crc kubenswrapper[4699]: E0226 11:12:50.260958 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:50 crc kubenswrapper[4699]: E0226 11:12:50.261006 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.270677 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.282864 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:50Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.298646 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.298725 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.298749 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.298775 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 
11:12:50.298789 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:50Z","lastTransitionTime":"2026-02-26T11:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.401473 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.401514 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.401527 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.401545 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.401556 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:50Z","lastTransitionTime":"2026-02-26T11:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.505233 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.505292 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.505303 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.505319 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.505330 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:50Z","lastTransitionTime":"2026-02-26T11:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.607918 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.607951 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.607961 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.607995 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.608009 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:50Z","lastTransitionTime":"2026-02-26T11:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.710606 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.710651 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.710664 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.710679 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.710689 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:50Z","lastTransitionTime":"2026-02-26T11:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.813736 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.813768 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.813776 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.813791 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.813800 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:50Z","lastTransitionTime":"2026-02-26T11:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.915995 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.916038 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.916049 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.916065 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:50 crc kubenswrapper[4699]: I0226 11:12:50.916083 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:50Z","lastTransitionTime":"2026-02-26T11:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.039904 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.039935 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.039944 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.039958 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.039967 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:51Z","lastTransitionTime":"2026-02-26T11:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.142081 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.142133 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.142144 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.142179 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.142192 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:51Z","lastTransitionTime":"2026-02-26T11:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.244834 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.244869 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.244877 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.244894 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.244903 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:51Z","lastTransitionTime":"2026-02-26T11:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.346884 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.346932 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.346945 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.346962 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.346978 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:51Z","lastTransitionTime":"2026-02-26T11:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.448895 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.448935 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.448947 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.448963 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.448974 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:51Z","lastTransitionTime":"2026-02-26T11:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.550726 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.550758 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.550767 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.550780 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.550790 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:51Z","lastTransitionTime":"2026-02-26T11:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.653092 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.653150 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.653161 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.653174 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.653184 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:51Z","lastTransitionTime":"2026-02-26T11:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.755575 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.755623 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.755637 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.755653 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.755664 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:51Z","lastTransitionTime":"2026-02-26T11:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.811876 4699 generic.go:334] "Generic (PLEG): container finished" podID="95d160b5-697e-42fa-8cd0-8b7b337820c4" containerID="fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38" exitCode=0 Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.811973 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" event={"ID":"95d160b5-697e-42fa-8cd0-8b7b337820c4","Type":"ContainerDied","Data":"fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.827049 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:51Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.841366 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:51Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.854011 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:51Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.857673 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.857717 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.857729 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.857745 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.857758 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:51Z","lastTransitionTime":"2026-02-26T11:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.875002 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:51Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.887491 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:51Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.900666 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:51Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.915910 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:51Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.933211 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:51Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.955255 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249
0762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:51Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.964154 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.964241 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.964258 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.964279 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.964292 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:51Z","lastTransitionTime":"2026-02-26T11:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:51 crc kubenswrapper[4699]: I0226 11:12:51.985750 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"c
ert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:51Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.001309 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:51Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.017956 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.030997 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c60
7682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.044021 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6
df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.066630 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.066666 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.066676 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:52 crc 
kubenswrapper[4699]: I0226 11:12:52.066692 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.066711 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:52Z","lastTransitionTime":"2026-02-26T11:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.149503 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.149759 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:24.149731824 +0000 UTC m=+149.960558258 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.169314 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.169574 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.169679 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.169786 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.169963 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:52Z","lastTransitionTime":"2026-02-26T11:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.251014 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.251074 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.251097 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.251159 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251261 4699 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251310 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:24.251297911 +0000 UTC m=+150.062124345 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251477 4699 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251501 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:24.251494526 +0000 UTC m=+150.062320960 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251623 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251638 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251647 4699 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251669 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:24.251663131 +0000 UTC m=+150.062489565 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251926 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251949 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251958 4699 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.251987 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:24.251977389 +0000 UTC m=+150.062803823 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.260105 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.260111 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.260136 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.260329 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.260477 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:52 crc kubenswrapper[4699]: E0226 11:12:52.260681 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.272847 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.272901 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.272915 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.272936 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.272949 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:52Z","lastTransitionTime":"2026-02-26T11:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.375060 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.375107 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.375143 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.375164 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.375175 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:52Z","lastTransitionTime":"2026-02-26T11:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.477653 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.477698 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.477711 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.477727 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.477740 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:52Z","lastTransitionTime":"2026-02-26T11:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.579474 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.579519 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.579533 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.579549 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.579560 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:52Z","lastTransitionTime":"2026-02-26T11:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.683297 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.683341 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.683352 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.683368 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.683380 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:52Z","lastTransitionTime":"2026-02-26T11:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.786254 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.786298 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.786313 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.786331 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.786344 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:52Z","lastTransitionTime":"2026-02-26T11:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.820261 4699 generic.go:334] "Generic (PLEG): container finished" podID="95d160b5-697e-42fa-8cd0-8b7b337820c4" containerID="df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c" exitCode=0 Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.820319 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" event={"ID":"95d160b5-697e-42fa-8cd0-8b7b337820c4","Type":"ContainerDied","Data":"df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c"} Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.835667 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.853105 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.875070 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.890223 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.890286 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.890301 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.890323 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.890336 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:52Z","lastTransitionTime":"2026-02-26T11:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.899422 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.914039 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.933047 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.948455 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.962043 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a209
43a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.980625 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.993146 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.993493 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.993531 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.993543 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 
11:12:52.993559 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:52 crc kubenswrapper[4699]: I0226 11:12:52.993571 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:52Z","lastTransitionTime":"2026-02-26T11:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.010067 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.022919 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.043250 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.055508 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.095702 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.095742 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.095753 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.095772 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 
11:12:53.095788 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:53Z","lastTransitionTime":"2026-02-26T11:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.198421 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.198456 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.198468 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.198482 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.198493 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:53Z","lastTransitionTime":"2026-02-26T11:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.260742 4699 scope.go:117] "RemoveContainer" containerID="eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.301493 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.301538 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.301550 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.301596 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.301610 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:53Z","lastTransitionTime":"2026-02-26T11:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.405795 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.405823 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.405832 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.405845 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.405856 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:53Z","lastTransitionTime":"2026-02-26T11:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.508401 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.508433 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.508442 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.508455 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.508466 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:53Z","lastTransitionTime":"2026-02-26T11:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.610769 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.610811 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.610822 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.610839 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.610849 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:53Z","lastTransitionTime":"2026-02-26T11:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.711572 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4"] Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.712133 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.712942 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.712970 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.712983 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.713019 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.713034 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:53Z","lastTransitionTime":"2026-02-26T11:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.714056 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.714216 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.727695 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.739367 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.753953 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.768814 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dd0f846-a702-4f37-a862-f620cb23e7bf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.768861 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnqfc\" (UniqueName: \"kubernetes.io/projected/6dd0f846-a702-4f37-a862-f620cb23e7bf-kube-api-access-rnqfc\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc 
kubenswrapper[4699]: I0226 11:12:53.768880 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dd0f846-a702-4f37-a862-f620cb23e7bf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.768905 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dd0f846-a702-4f37-a862-f620cb23e7bf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.776499 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.787889 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.802348 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206
b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.814190 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.815637 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.815674 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.815686 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.815704 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.815718 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:53Z","lastTransitionTime":"2026-02-26T11:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.824919 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.826421 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.826751 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.830847 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" event={"ID":"95d160b5-697e-42fa-8cd0-8b7b337820c4","Type":"ContainerStarted","Data":"aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.831174 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.844728 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a209
43a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.855819 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.869745 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dd0f846-a702-4f37-a862-f620cb23e7bf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.869831 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dd0f846-a702-4f37-a862-f620cb23e7bf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.869913 4699 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.870989 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dd0f846-a702-4f37-a862-f620cb23e7bf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.871388 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dd0f846-a702-4f37-a862-f620cb23e7bf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.871792 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6dd0f846-a702-4f37-a862-f620cb23e7bf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.872444 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnqfc\" (UniqueName: \"kubernetes.io/projected/6dd0f846-a702-4f37-a862-f620cb23e7bf-kube-api-access-rnqfc\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.883147 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dd0f846-a702-4f37-a862-f620cb23e7bf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.885520 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.889544 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnqfc\" (UniqueName: \"kubernetes.io/projected/6dd0f846-a702-4f37-a862-f620cb23e7bf-kube-api-access-rnqfc\") pod \"ovnkube-control-plane-749d76644c-9nrn4\" (UID: \"6dd0f846-a702-4f37-a862-f620cb23e7bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.896398 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.913489 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.917731 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.917780 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.917791 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.917832 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.917857 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:53Z","lastTransitionTime":"2026-02-26T11:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.922798 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.935159 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be
5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.946335 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe
3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.958926 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.978414 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:53 crc kubenswrapper[4699]: I0226 11:12:53.990242 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.004388 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.018294 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.023056 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.023092 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.023101 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.023136 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.023148 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.025722 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.036024 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.053456 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"app
rover\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc 
kubenswrapper[4699]: W0226 11:12:54.071482 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dd0f846_a702_4f37_a862_f620cb23e7bf.slice/crio-c26b394f33ef5ce7f27295b839e9a69160522b06071e4f8a8bb4a77d3876bc05 WatchSource:0}: Error finding container c26b394f33ef5ce7f27295b839e9a69160522b06071e4f8a8bb4a77d3876bc05: Status 404 returned error can't find the container with id c26b394f33ef5ce7f27295b839e9a69160522b06071e4f8a8bb4a77d3876bc05 Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.072657 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.088416 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.104993 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffc
ec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.119840 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.125797 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.125840 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.125852 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.125868 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.125881 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.141920 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.155990 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.228093 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.228141 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.228149 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.228161 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.228170 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.260180 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.260226 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.260294 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.260344 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.260444 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.260523 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.332315 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.332352 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.332362 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.332375 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.332386 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.385861 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.385913 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.385924 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.385942 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.385954 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.402057 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.407665 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.407726 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.407737 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.407756 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.407770 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.422279 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.428440 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.428494 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.428505 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.428523 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.428535 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.442878 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.449639 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.449696 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.449707 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.449726 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.449741 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.457034 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-v5ctv"] Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.457769 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.457867 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.468265 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.473322 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.473357 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.473365 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.473377 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.473387 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.477018 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z 
is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.478428 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.478540 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phthm\" (UniqueName: \"kubernetes.io/projected/6956c039-cf77-429b-8f7f-f93ba195d321-kube-api-access-phthm\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.491330 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6
df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.491683 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.491839 4699 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.493528 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.493572 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.493596 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.493614 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.493630 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.508634 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.525076 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.542160 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.556204 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.579628 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phthm\" (UniqueName: \"kubernetes.io/projected/6956c039-cf77-429b-8f7f-f93ba195d321-kube-api-access-phthm\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.579708 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.579837 4699 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:12:54 crc kubenswrapper[4699]: E0226 11:12:54.579900 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs 
podName:6956c039-cf77-429b-8f7f-f93ba195d321 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:55.079883816 +0000 UTC m=+120.890710250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs") pod "network-metrics-daemon-v5ctv" (UID: "6956c039-cf77-429b-8f7f-f93ba195d321") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.586339 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.595432 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.595465 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.595474 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.595489 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.595501 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.598922 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phthm\" (UniqueName: \"kubernetes.io/projected/6956c039-cf77-429b-8f7f-f93ba195d321-kube-api-access-phthm\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.601084 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.619358 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.646998 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.662695 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.675898 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc 
kubenswrapper[4699]: I0226 11:12:54.697938 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.698000 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.698012 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.698049 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.698060 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.701206 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.718289 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.742526 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.755885 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.801697 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.801747 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.801760 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.801781 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.801795 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.837975 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" event={"ID":"6dd0f846-a702-4f37-a862-f620cb23e7bf","Type":"ContainerStarted","Data":"19dd2547b7a142b99f0170df619d1b1750e287507d7f8b8bad3299a4a3d23849"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.838042 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" event={"ID":"6dd0f846-a702-4f37-a862-f620cb23e7bf","Type":"ContainerStarted","Data":"a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.838055 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" event={"ID":"6dd0f846-a702-4f37-a862-f620cb23e7bf","Type":"ContainerStarted","Data":"c26b394f33ef5ce7f27295b839e9a69160522b06071e4f8a8bb4a77d3876bc05"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.868459 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.907508 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.916718 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.916769 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.916781 4699 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.916799 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.916813 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:54Z","lastTransitionTime":"2026-02-26T11:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.929673 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.943479 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc 
kubenswrapper[4699]: I0226 11:12:54.971588 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:54 crc kubenswrapper[4699]: I0226 11:12:54.992071 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35
512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.011399 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffc
ec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.019872 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.019936 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.019947 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.019961 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.019974 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:55Z","lastTransitionTime":"2026-02-26T11:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.025677 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e287507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.040733 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.055205 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a209
43a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.068999 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.088658 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:12:55 crc kubenswrapper[4699]: E0226 11:12:55.088839 4699 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.088999 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: E0226 11:12:55.088918 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs podName:6956c039-cf77-429b-8f7f-f93ba195d321 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:56.088900231 +0000 UTC m=+121.899726665 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs") pod "network-metrics-daemon-v5ctv" (UID: "6956c039-cf77-429b-8f7f-f93ba195d321") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.106049 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.124614 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.124655 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.124666 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.124682 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.124694 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:55Z","lastTransitionTime":"2026-02-26T11:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.126615 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.150108 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.167717 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.228487 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.228533 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.228546 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.228563 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 
11:12:55.228575 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:55Z","lastTransitionTime":"2026-02-26T11:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.331099 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.331150 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.331163 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.331178 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.331188 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:55Z","lastTransitionTime":"2026-02-26T11:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.435605 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.435678 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.435708 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.435734 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.435752 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:55Z","lastTransitionTime":"2026-02-26T11:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.539207 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.539246 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.539255 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.539267 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.539276 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:55Z","lastTransitionTime":"2026-02-26T11:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.641989 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.642050 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.642065 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.642084 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.642099 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:55Z","lastTransitionTime":"2026-02-26T11:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.744398 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.744433 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.744444 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.744458 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.744472 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:55Z","lastTransitionTime":"2026-02-26T11:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.846686 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.847339 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovnkube-controller/0.log" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.847408 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.847456 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.847502 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.847520 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:55Z","lastTransitionTime":"2026-02-26T11:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.852287 4699 generic.go:334] "Generic (PLEG): container finished" podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerID="2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df" exitCode=1 Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.852393 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df"} Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.853369 4699 scope.go:117] "RemoveContainer" containerID="2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.868317 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.886076 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.901005 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc 
kubenswrapper[4699]: I0226 11:12:55.919017 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.936562 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.950798 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.950873 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.950892 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.950915 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.950932 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:55Z","lastTransitionTime":"2026-02-26T11:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.956666 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3
bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.969259 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e287507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"star
tTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:55 crc kubenswrapper[4699]: I0226 11:12:55.989746 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"
log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d4
80989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:55Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.004021 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6
df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.022721 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.037220 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.052274 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.053062 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.053093 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.053103 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.053135 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.053149 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:12:56Z","lastTransitionTime":"2026-02-26T11:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.077387 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:12:55Z\\\",\\\"message\\\":\\\"6 11:12:55.725598 6475 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:12:55.725608 6475 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 11:12:55.725632 6475 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI0226 11:12:55.725634 6475 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 11:12:55.725643 6475 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:12:55.725661 6475 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 11:12:55.725662 6475 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 11:12:55.725670 6475 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 11:12:55.725679 6475 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:12:55.725699 6475 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 11:12:55.725719 6475 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 11:12:55.725726 6475 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 11:12:55.725742 6475 factory.go:656] Stopping watch factory\\\\nI0226 11:12:55.725744 6475 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:12:55.725764 6475 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:12:55.725766 6475 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698
cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.093476 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.100322 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:12:56 crc kubenswrapper[4699]: E0226 11:12:56.100709 4699 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:12:56 crc kubenswrapper[4699]: E0226 11:12:56.100775 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs podName:6956c039-cf77-429b-8f7f-f93ba195d321 nodeName:}" failed. No retries permitted until 2026-02-26 11:12:58.100758092 +0000 UTC m=+123.911584526 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs") pod "network-metrics-daemon-v5ctv" (UID: "6956c039-cf77-429b-8f7f-f93ba195d321") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.111053 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.129045 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: E0226 11:12:56.153452 4699 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.259806 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.259860 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.259946 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:12:56 crc kubenswrapper[4699]: E0226 11:12:56.259936 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.260008 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:56 crc kubenswrapper[4699]: E0226 11:12:56.260179 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:56 crc kubenswrapper[4699]: E0226 11:12:56.260285 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:12:56 crc kubenswrapper[4699]: E0226 11:12:56.260359 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.290298 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.306877 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.319667 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.331959 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc 
kubenswrapper[4699]: I0226 11:12:56.357365 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.372837 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35
512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.392683 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffc
ec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.406795 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e287507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.423745 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.440178 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-pr
oxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.456499 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.478801 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.499167 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.516385 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: E0226 11:12:56.557913 4699 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.559400 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:12:55Z\\\",\\\"message\\\":\\\"6 11:12:55.725598 6475 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:12:55.725608 6475 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 11:12:55.725632 6475 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI0226 11:12:55.725634 6475 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 11:12:55.725643 6475 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:12:55.725661 6475 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 11:12:55.725662 6475 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 11:12:55.725670 6475 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 11:12:55.725679 6475 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:12:55.725699 6475 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 11:12:55.725719 6475 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 11:12:55.725726 6475 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 11:12:55.725742 6475 factory.go:656] Stopping watch factory\\\\nI0226 11:12:55.725744 6475 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:12:55.725764 6475 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:12:55.725766 6475 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698
cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.579642 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.860006 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovnkube-controller/0.log" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.864318 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396"} Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.865023 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.895370 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.914003 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.933045 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffc
ec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.945963 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e287507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.960616 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.972010 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-pr
oxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:56 crc kubenswrapper[4699]: I0226 11:12:56.987248 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:56Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.004456 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.017625 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.028656 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.047712 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:12:55Z\\\",\\\"message\\\":\\\"6 11:12:55.725598 6475 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:12:55.725608 6475 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 11:12:55.725632 6475 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 11:12:55.725634 6475 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0226 11:12:55.725643 6475 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:12:55.725661 6475 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 11:12:55.725662 6475 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 11:12:55.725670 6475 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 11:12:55.725679 6475 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:12:55.725699 6475 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 11:12:55.725719 6475 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 11:12:55.725726 6475 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 11:12:55.725742 6475 factory.go:656] Stopping watch factory\\\\nI0226 11:12:55.725744 6475 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:12:55.725764 6475 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:12:55.725766 6475 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.058657 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.076974 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.088363 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.100881 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.111768 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc 
kubenswrapper[4699]: I0226 11:12:57.869331 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovnkube-controller/1.log" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.870509 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovnkube-controller/0.log" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.873351 4699 generic.go:334] "Generic (PLEG): container finished" podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerID="46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396" exitCode=1 Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.873388 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396"} Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.873443 4699 scope.go:117] "RemoveContainer" containerID="2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.874035 4699 scope.go:117] "RemoveContainer" containerID="46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396" Feb 26 11:12:57 crc kubenswrapper[4699]: E0226 11:12:57.874224 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cw6vx_openshift-ovn-kubernetes(cd12b2df-7af6-45bc-88e7-d5e5e6451e65)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.889512 4699 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.905067 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.918445 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc 
kubenswrapper[4699]: I0226 11:12:57.932056 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.953860 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.968298 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:57 crc kubenswrapper[4699]: I0226 11:12:57.987171 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:57.999928 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e287507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:57Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.014572 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26
T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.026485 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rba
c-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.039883 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.053787 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.064648 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.082664 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2415fafc678da73fd9c30a65db8d0b812a049df0797deb080a2118c22e4536df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:12:55Z\\\",\\\"message\\\":\\\"6 11:12:55.725598 6475 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:12:55.725608 6475 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 11:12:55.725632 6475 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 11:12:55.725634 6475 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0226 11:12:55.725643 6475 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:12:55.725661 6475 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 11:12:55.725662 6475 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 11:12:55.725670 6475 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 11:12:55.725679 6475 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:12:55.725699 6475 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 11:12:55.725719 6475 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 11:12:55.725726 6475 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 11:12:55.725742 6475 factory.go:656] Stopping watch factory\\\\nI0226 11:12:55.725744 6475 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:12:55.725764 6475 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:12:55.725766 6475 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:12:57Z\\\",\\\"message\\\":\\\"openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 11:12:56.988067 6769 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-gbl2h in node crc\\\\nI0226 11:12:56.988312 6769 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-gbl2h after 0 failed attempt(s)\\\\nI0226 11:12:56.988318 6769 default_network_controller.go:776] Recording success event on pod 
openshift-dns/node-resolver-gbl2h\\\\nI0226 11:12:56.988176 6769 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0226 11:12:56.988330 6769 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0226 11:12:56.988335 6769 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0226 11:12:56.988194 6769 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988346 6769 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988353 6769 ovn.go:134] Ensuring zone local for Pod openshift-network-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnm
g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.094352 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.107234 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.122722 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:12:58 crc kubenswrapper[4699]: E0226 11:12:58.122910 4699 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:12:58 crc kubenswrapper[4699]: E0226 11:12:58.122992 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs podName:6956c039-cf77-429b-8f7f-f93ba195d321 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:02.122972441 +0000 UTC m=+127.933798875 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs") pod "network-metrics-daemon-v5ctv" (UID: "6956c039-cf77-429b-8f7f-f93ba195d321") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.260280 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:12:58 crc kubenswrapper[4699]: E0226 11:12:58.260431 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.260458 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.260527 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:12:58 crc kubenswrapper[4699]: E0226 11:12:58.260615 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:12:58 crc kubenswrapper[4699]: E0226 11:12:58.260690 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.260916 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:12:58 crc kubenswrapper[4699]: E0226 11:12:58.261061 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.880006 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovnkube-controller/1.log" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.886834 4699 scope.go:117] "RemoveContainer" containerID="46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396" Feb 26 11:12:58 crc kubenswrapper[4699]: E0226 11:12:58.887201 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cw6vx_openshift-ovn-kubernetes(cd12b2df-7af6-45bc-88e7-d5e5e6451e65)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.910949 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.924235 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.947366 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffc
ec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.959907 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e287507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.973689 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.985999 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-pr
oxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:58 crc kubenswrapper[4699]: I0226 11:12:58.999824 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:58Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:59 crc kubenswrapper[4699]: I0226 11:12:59.013861 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:59Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:59 crc kubenswrapper[4699]: I0226 11:12:59.030989 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:59Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:59 crc kubenswrapper[4699]: I0226 11:12:59.043898 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:59Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:59 crc kubenswrapper[4699]: I0226 11:12:59.068738 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:12:57Z\\\",\\\"message\\\":\\\"openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 11:12:56.988067 6769 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-gbl2h in node crc\\\\nI0226 11:12:56.988312 
6769 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-gbl2h after 0 failed attempt(s)\\\\nI0226 11:12:56.988318 6769 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gbl2h\\\\nI0226 11:12:56.988176 6769 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0226 11:12:56.988330 6769 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0226 11:12:56.988335 6769 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0226 11:12:56.988194 6769 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988346 6769 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988353 6769 ovn.go:134] Ensuring zone local for Pod openshift-network-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cw6vx_openshift-ovn-kubernetes(cd12b2df-7af6-45bc-88e7-d5e5e6451e65)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88
024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:59Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:59 crc kubenswrapper[4699]: I0226 11:12:59.084334 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:59Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:59 crc kubenswrapper[4699]: I0226 11:12:59.105177 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:59Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:59 crc kubenswrapper[4699]: I0226 11:12:59.118759 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:59Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:59 crc kubenswrapper[4699]: I0226 11:12:59.135953 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:59Z is after 2025-08-24T17:21:41Z" Feb 26 11:12:59 crc kubenswrapper[4699]: I0226 11:12:59.147755 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:12:59Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:00 crc 
kubenswrapper[4699]: I0226 11:13:00.260177 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:00 crc kubenswrapper[4699]: I0226 11:13:00.260260 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:00 crc kubenswrapper[4699]: I0226 11:13:00.260272 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:00 crc kubenswrapper[4699]: E0226 11:13:00.260328 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:00 crc kubenswrapper[4699]: E0226 11:13:00.260481 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:00 crc kubenswrapper[4699]: I0226 11:13:00.260519 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:00 crc kubenswrapper[4699]: E0226 11:13:00.260803 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:00 crc kubenswrapper[4699]: E0226 11:13:00.260554 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:01 crc kubenswrapper[4699]: E0226 11:13:01.559577 4699 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 11:13:02 crc kubenswrapper[4699]: I0226 11:13:02.171854 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:02 crc kubenswrapper[4699]: E0226 11:13:02.172043 4699 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:13:02 crc kubenswrapper[4699]: E0226 11:13:02.172201 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs podName:6956c039-cf77-429b-8f7f-f93ba195d321 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:10.172181363 +0000 UTC m=+135.983007797 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs") pod "network-metrics-daemon-v5ctv" (UID: "6956c039-cf77-429b-8f7f-f93ba195d321") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:13:02 crc kubenswrapper[4699]: I0226 11:13:02.260715 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:02 crc kubenswrapper[4699]: I0226 11:13:02.260750 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:02 crc kubenswrapper[4699]: E0226 11:13:02.260840 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:02 crc kubenswrapper[4699]: I0226 11:13:02.260715 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:02 crc kubenswrapper[4699]: E0226 11:13:02.260967 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:02 crc kubenswrapper[4699]: E0226 11:13:02.261029 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:02 crc kubenswrapper[4699]: I0226 11:13:02.261051 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:02 crc kubenswrapper[4699]: E0226 11:13:02.261234 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:03 crc kubenswrapper[4699]: I0226 11:13:03.268809 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.260414 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.260514 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.260535 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.260584 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:04 crc kubenswrapper[4699]: E0226 11:13:04.260659 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:04 crc kubenswrapper[4699]: E0226 11:13:04.260810 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:04 crc kubenswrapper[4699]: E0226 11:13:04.260997 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:04 crc kubenswrapper[4699]: E0226 11:13:04.261046 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.540388 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.540433 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.540446 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.540461 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.540473 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:04Z","lastTransitionTime":"2026-02-26T11:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:13:04 crc kubenswrapper[4699]: E0226 11:13:04.553455 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.558086 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.558135 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.558146 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.558163 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.558177 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:04Z","lastTransitionTime":"2026-02-26T11:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:13:04 crc kubenswrapper[4699]: E0226 11:13:04.573324 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.576739 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.576776 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.576787 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.576804 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.576816 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:04Z","lastTransitionTime":"2026-02-26T11:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:13:04 crc kubenswrapper[4699]: E0226 11:13:04.592387 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.595533 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.595568 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.595582 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.595604 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.595615 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:04Z","lastTransitionTime":"2026-02-26T11:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:13:04 crc kubenswrapper[4699]: E0226 11:13:04.608842 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.612839 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.612902 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.612915 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.612934 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:04 crc kubenswrapper[4699]: I0226 11:13:04.612948 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:04Z","lastTransitionTime":"2026-02-26T11:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:13:04 crc kubenswrapper[4699]: E0226 11:13:04.625635 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:04 crc kubenswrapper[4699]: E0226 11:13:04.625824 4699 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.260459 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:06 crc kubenswrapper[4699]: E0226 11:13:06.260558 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.260725 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:06 crc kubenswrapper[4699]: E0226 11:13:06.260769 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.260872 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:06 crc kubenswrapper[4699]: E0226 11:13:06.260915 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.261056 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:06 crc kubenswrapper[4699]: E0226 11:13:06.261135 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.277947 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.290848 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ad0a6b-b6ae-493c-9d3c-22b171765374\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e3cbce38225d1f5dc3e506a9bc813f8b1e76e2f3315a83ce984c0b51ad0b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a914b4d63550a877b25fc802fe067357dc144ba29c8d4cff98b2d3d1c10baa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e0c4b60e5b4a7561832379bb034e4202ac67058677a2e7e362703e8f5952b83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.306290 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.321649 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.333053 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc 
kubenswrapper[4699]: I0226 11:13:06.352014 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.363201 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35
512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.376264 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffc
ec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.388064 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e287507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.400452 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.410582 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-pr
oxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.421603 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.435280 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.449087 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.459636 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.479403 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:12:57Z\\\",\\\"message\\\":\\\"openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 11:12:56.988067 6769 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-gbl2h in node crc\\\\nI0226 11:12:56.988312 
6769 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-gbl2h after 0 failed attempt(s)\\\\nI0226 11:12:56.988318 6769 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gbl2h\\\\nI0226 11:12:56.988176 6769 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0226 11:12:56.988330 6769 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0226 11:12:56.988335 6769 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0226 11:12:56.988194 6769 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988346 6769 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988353 6769 ovn.go:134] Ensuring zone local for Pod openshift-network-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cw6vx_openshift-ovn-kubernetes(cd12b2df-7af6-45bc-88e7-d5e5e6451e65)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88
024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: I0226 11:13:06.490472 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:06 crc kubenswrapper[4699]: E0226 11:13:06.560091 4699 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.714804 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.746455 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:12:57Z\\\",\\\"message\\\":\\\"openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 11:12:56.988067 6769 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-gbl2h in node crc\\\\nI0226 11:12:56.988312 6769 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-dns/node-resolver-gbl2h after 0 failed attempt(s)\\\\nI0226 11:12:56.988318 6769 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gbl2h\\\\nI0226 11:12:56.988176 6769 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0226 11:12:56.988330 6769 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0226 11:12:56.988335 6769 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0226 11:12:56.988194 6769 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988346 6769 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988353 6769 ovn.go:134] Ensuring zone local for Pod openshift-network-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cw6vx_openshift-ovn-kubernetes(cd12b2df-7af6-45bc-88e7-d5e5e6451e65)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88
024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.760502 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.776778 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.791648 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.806946 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.821264 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.833330 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.848283 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.862231 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ad0a6b-b6ae-493c-9d3c-22b171765374\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e3cbce38225d1f5dc3e506a9bc813f8b1e76e2f3315a83ce984c0b51ad0b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a914b4d63550a877b25fc802fe067357dc144ba29c8d4cff98b2d3d1c10baa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e0c4b60e5b4a7561832379bb034e4202ac67058677a2e7e362703e8f5952b83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.896612 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.925424 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.946677 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e28
7507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.969960 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:07 crc kubenswrapper[4699]: I0226 11:13:07.986956 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:07Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:08 crc kubenswrapper[4699]: I0226 11:13:08.007931 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffc
ec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:08Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:08 crc kubenswrapper[4699]: I0226 11:13:08.033292 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:08Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:08 crc kubenswrapper[4699]: I0226 11:13:08.048639 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:08Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:08 crc kubenswrapper[4699]: 
I0226 11:13:08.259752 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:08 crc kubenswrapper[4699]: I0226 11:13:08.259781 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:08 crc kubenswrapper[4699]: I0226 11:13:08.259868 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:08 crc kubenswrapper[4699]: E0226 11:13:08.259936 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:08 crc kubenswrapper[4699]: I0226 11:13:08.260024 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:08 crc kubenswrapper[4699]: E0226 11:13:08.260085 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:08 crc kubenswrapper[4699]: E0226 11:13:08.260280 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:08 crc kubenswrapper[4699]: E0226 11:13:08.260374 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:10 crc kubenswrapper[4699]: I0226 11:13:10.260397 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:10 crc kubenswrapper[4699]: I0226 11:13:10.260434 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:10 crc kubenswrapper[4699]: I0226 11:13:10.260397 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:10 crc kubenswrapper[4699]: E0226 11:13:10.260554 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:10 crc kubenswrapper[4699]: I0226 11:13:10.260583 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:10 crc kubenswrapper[4699]: E0226 11:13:10.260664 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:10 crc kubenswrapper[4699]: E0226 11:13:10.260737 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:10 crc kubenswrapper[4699]: E0226 11:13:10.260838 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:10 crc kubenswrapper[4699]: I0226 11:13:10.263276 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:10 crc kubenswrapper[4699]: E0226 11:13:10.263450 4699 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:13:10 crc kubenswrapper[4699]: E0226 11:13:10.263571 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs podName:6956c039-cf77-429b-8f7f-f93ba195d321 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:26.263542202 +0000 UTC m=+152.074368826 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs") pod "network-metrics-daemon-v5ctv" (UID: "6956c039-cf77-429b-8f7f-f93ba195d321") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:13:11 crc kubenswrapper[4699]: E0226 11:13:11.561495 4699 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:13:12 crc kubenswrapper[4699]: I0226 11:13:12.259749 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:12 crc kubenswrapper[4699]: I0226 11:13:12.259749 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:12 crc kubenswrapper[4699]: E0226 11:13:12.260216 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:12 crc kubenswrapper[4699]: I0226 11:13:12.259792 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:12 crc kubenswrapper[4699]: I0226 11:13:12.259749 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:12 crc kubenswrapper[4699]: E0226 11:13:12.260336 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:12 crc kubenswrapper[4699]: E0226 11:13:12.260401 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:12 crc kubenswrapper[4699]: E0226 11:13:12.260223 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:13 crc kubenswrapper[4699]: I0226 11:13:13.261098 4699 scope.go:117] "RemoveContainer" containerID="46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396" Feb 26 11:13:13 crc kubenswrapper[4699]: I0226 11:13:13.971287 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovnkube-controller/1.log" Feb 26 11:13:13 crc kubenswrapper[4699]: I0226 11:13:13.974359 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0"} Feb 26 11:13:13 crc kubenswrapper[4699]: I0226 11:13:13.974784 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:13:13 crc kubenswrapper[4699]: I0226 11:13:13.988999 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:13Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.003789 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a209
43a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.018626 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.033556 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.044765 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.064463 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:12:57Z\\\",\\\"message\\\":\\\"openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 11:12:56.988067 6769 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-gbl2h in node crc\\\\nI0226 11:12:56.988312 
6769 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-gbl2h after 0 failed attempt(s)\\\\nI0226 11:12:56.988318 6769 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gbl2h\\\\nI0226 11:12:56.988176 6769 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0226 11:12:56.988330 6769 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0226 11:12:56.988335 6769 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0226 11:12:56.988194 6769 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988346 6769 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988353 6769 ovn.go:134] Ensuring zone local for Pod 
openshift-network-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:13:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-
config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecf
g-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.080073 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.094975 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.107414 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ad0a6b-b6ae-493c-9d3c-22b171765374\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e3cbce38225d1f5dc3e506a9bc813f8b1e76e2f3315a83ce984c0b51ad0b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a914b4d63550a877b25fc802fe067357dc144ba29c8d4cff98b2d3d1c10baa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e0c4b60e5b4a7561832379bb034e4202ac67058677a2e7e362703e8f5952b83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.120334 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.137831 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.153348 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc 
kubenswrapper[4699]: I0226 11:13:14.175363 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363
066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 
11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.204351 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.222029 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.249259 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffc
ec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.259883 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.259907 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.259946 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:14 crc kubenswrapper[4699]: E0226 11:13:14.260031 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.260168 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:14 crc kubenswrapper[4699]: E0226 11:13:14.260162 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:14 crc kubenswrapper[4699]: E0226 11:13:14.260245 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:14 crc kubenswrapper[4699]: E0226 11:13:14.260365 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.265446 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e28
7507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.646063 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.646138 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.646152 4699 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.646168 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.646178 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:14Z","lastTransitionTime":"2026-02-26T11:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:13:14 crc kubenswrapper[4699]: E0226 11:13:14.658993 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.662571 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.662628 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.662641 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.662659 4699 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.662671 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:14Z","lastTransitionTime":"2026-02-26T11:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:13:14 crc kubenswrapper[4699]: E0226 11:13:14.674853 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.678456 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.678489 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.678499 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.678513 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.678522 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:14Z","lastTransitionTime":"2026-02-26T11:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:13:14 crc kubenswrapper[4699]: E0226 11:13:14.693933 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.697704 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.697746 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.697758 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.697775 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.697787 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:14Z","lastTransitionTime":"2026-02-26T11:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:13:14 crc kubenswrapper[4699]: E0226 11:13:14.709961 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.714518 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.714562 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.714572 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.714592 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.714602 4699 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:14Z","lastTransitionTime":"2026-02-26T11:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:13:14 crc kubenswrapper[4699]: E0226 11:13:14.726730 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:14 crc kubenswrapper[4699]: E0226 11:13:14.726891 4699 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.979370 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovnkube-controller/2.log" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.980456 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovnkube-controller/1.log" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.983581 4699 generic.go:334] "Generic (PLEG): container finished" podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" 
containerID="063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0" exitCode=1 Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.983680 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0"} Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.983784 4699 scope.go:117] "RemoveContainer" containerID="46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396" Feb 26 11:13:14 crc kubenswrapper[4699]: I0226 11:13:14.984510 4699 scope.go:117] "RemoveContainer" containerID="063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0" Feb 26 11:13:14 crc kubenswrapper[4699]: E0226 11:13:14.984738 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cw6vx_openshift-ovn-kubernetes(cd12b2df-7af6-45bc-88e7-d5e5e6451e65)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.008696 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46aba02a00b28d54085033583368e3a91e4a1f042cab3b0abc3885f717555396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:12:57Z\\\",\\\"message\\\":\\\"openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 11:12:56.988067 6769 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-gbl2h in node crc\\\\nI0226 11:12:56.988312 6769 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-dns/node-resolver-gbl2h after 0 failed attempt(s)\\\\nI0226 11:12:56.988318 6769 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-gbl2h\\\\nI0226 11:12:56.988176 6769 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0226 11:12:56.988330 6769 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0226 11:12:56.988335 6769 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0226 11:12:56.988194 6769 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988346 6769 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0226 11:12:56.988353 6769 ovn.go:134] Ensuring zone local for Pod openshift-network-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"ernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:13:14.559644 7027 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.559182 7027 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.560083 7027 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.560811 7027 shared_informer.go:320] Caches are synced for 
node-tracker-controller\\\\nI0226 11:13:14.560836 7027 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0226 11:13:14.561218 7027 factory.go:656] Stopping watch factory\\\\nI0226 11:13:14.561367 7027 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.561415 7027 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:13:14.562003 7027 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:13:14.562038 7027 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 11:13:14.562134 7027 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:13:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.021274 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.037359 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.054862 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.070164 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.084442 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.098620 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.115434 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.131581 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ad0a6b-b6ae-493c-9d3c-22b171765374\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e3cbce38225d1f5dc3e506a9bc813f8b1e76e2f3315a83ce984c0b51ad0b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a914b4d63550a877b25fc802fe067357dc144ba29c8d4cff98b2d3d1c10baa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e0c4b60e5b4a7561832379bb034e4202ac67058677a2e7e362703e8f5952b83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.149081 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.162595 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.174752 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e28
7507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.194626 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.210828 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.226649 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffc
ec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.241693 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.256777 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:15 crc kubenswrapper[4699]: 
I0226 11:13:15.989501 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovnkube-controller/2.log" Feb 26 11:13:15 crc kubenswrapper[4699]: I0226 11:13:15.993378 4699 scope.go:117] "RemoveContainer" containerID="063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0" Feb 26 11:13:15 crc kubenswrapper[4699]: E0226 11:13:15.993628 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cw6vx_openshift-ovn-kubernetes(cd12b2df-7af6-45bc-88e7-d5e5e6451e65)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.005863 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6
df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.018530 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.034732 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.048473 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.072250 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"ernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\
\\\nI0226 11:13:14.559644 7027 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.559182 7027 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.560083 7027 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.560811 7027 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0226 11:13:14.560836 7027 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0226 11:13:14.561218 7027 factory.go:656] Stopping watch factory\\\\nI0226 11:13:14.561367 7027 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.561415 7027 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:13:14.562003 7027 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:13:14.562038 7027 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 11:13:14.562134 7027 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:13:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cw6vx_openshift-ovn-kubernetes(cd12b2df-7af6-45bc-88e7-d5e5e6451e65)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88
024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.084042 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.098664 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.110706 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.121415 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.135525 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.149159 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc 
kubenswrapper[4699]: I0226 11:13:16.169136 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363
066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 
11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.184508 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ad0a6b-b6ae-493c-9d3c-22b171765374\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e3cbce38225d1f5dc3e506a9bc813f8b1e76e2f3315a83ce984c0b51ad0b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a914b4d63550a877b25fc802fe067357dc144ba29c8d4cff98b2d3d1c10baa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e0c4b60e5b4a7561832379bb034e4202ac67058677a2e7e362703e8f5952b83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.198193 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.214584 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.229893 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e287507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.252996 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a0
7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.260294 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.260357 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:16 crc kubenswrapper[4699]: E0226 11:13:16.260555 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.260679 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.260691 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:16 crc kubenswrapper[4699]: E0226 11:13:16.260820 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:16 crc kubenswrapper[4699]: E0226 11:13:16.260947 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:16 crc kubenswrapper[4699]: E0226 11:13:16.261100 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.284968 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.299989 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.317529 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.329824 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e287507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.345575 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26
T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.357978 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rba
c-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.368298 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.379874 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.392522 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.406837 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.418245 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.438331 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"ernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:13:14.559644 7027 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.559182 7027 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.560083 7027 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.560811 7027 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0226 11:13:14.560836 7027 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0226 11:13:14.561218 7027 factory.go:656] Stopping watch factory\\\\nI0226 11:13:14.561367 7027 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.561415 7027 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:13:14.562003 7027 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:13:14.562038 7027 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 11:13:14.562134 7027 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:13:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cw6vx_openshift-ovn-kubernetes(cd12b2df-7af6-45bc-88e7-d5e5e6451e65)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88
024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.453971 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b
808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.466810 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ad0a6b-b6ae-493c-9d3c-22b171765374\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e3cbce38225d1f5dc3e506a9bc813f8b1e76e2f3315a83ce984c0b51ad0b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a914b4d63550a877b25fc802fe067357dc144ba29c8d4cff98b2d3d1c10baa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e0c4b60e5b4a7561832379bb034e4202ac67058677a2e7e362703e8f5952b83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.479381 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.495579 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc kubenswrapper[4699]: I0226 11:13:16.507777 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:16 crc 
kubenswrapper[4699]: E0226 11:13:16.561894 4699 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:13:18 crc kubenswrapper[4699]: I0226 11:13:18.259790 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:18 crc kubenswrapper[4699]: I0226 11:13:18.259877 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:18 crc kubenswrapper[4699]: I0226 11:13:18.259931 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:18 crc kubenswrapper[4699]: E0226 11:13:18.260009 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:18 crc kubenswrapper[4699]: I0226 11:13:18.260067 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:18 crc kubenswrapper[4699]: E0226 11:13:18.260150 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:18 crc kubenswrapper[4699]: E0226 11:13:18.260195 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:18 crc kubenswrapper[4699]: E0226 11:13:18.260257 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:20 crc kubenswrapper[4699]: I0226 11:13:20.259919 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:20 crc kubenswrapper[4699]: I0226 11:13:20.259948 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:20 crc kubenswrapper[4699]: I0226 11:13:20.260434 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:20 crc kubenswrapper[4699]: E0226 11:13:20.260618 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:20 crc kubenswrapper[4699]: I0226 11:13:20.260705 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:20 crc kubenswrapper[4699]: E0226 11:13:20.260807 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:20 crc kubenswrapper[4699]: E0226 11:13:20.260876 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:20 crc kubenswrapper[4699]: E0226 11:13:20.260988 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:20 crc kubenswrapper[4699]: I0226 11:13:20.274434 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 26 11:13:21 crc kubenswrapper[4699]: E0226 11:13:21.563050 4699 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:13:22 crc kubenswrapper[4699]: I0226 11:13:22.260011 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:22 crc kubenswrapper[4699]: I0226 11:13:22.260094 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:22 crc kubenswrapper[4699]: I0226 11:13:22.260050 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:22 crc kubenswrapper[4699]: I0226 11:13:22.260182 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:22 crc kubenswrapper[4699]: E0226 11:13:22.260202 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:22 crc kubenswrapper[4699]: E0226 11:13:22.260300 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:22 crc kubenswrapper[4699]: E0226 11:13:22.260393 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:22 crc kubenswrapper[4699]: E0226 11:13:22.260504 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.218909 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.219176 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:28.219108293 +0000 UTC m=+214.029934777 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.260729 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.260776 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.260872 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.260941 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.260871 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.261017 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.261178 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.261240 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.319789 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.319850 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.319883 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.319903 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.319910 4699 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.319958 4699 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.319982 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:14:28.319964436 +0000 UTC m=+214.130790870 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.320002 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:14:28.319993097 +0000 UTC m=+214.130819531 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.320025 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.320048 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.320062 4699 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.320104 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:14:28.320093991 +0000 UTC m=+214.130920425 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.320223 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.320238 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.320247 4699 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.320281 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:14:28.320274397 +0000 UTC m=+214.131100831 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.768832 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.768857 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.768865 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.768878 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.768887 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:24Z","lastTransitionTime":"2026-02-26T11:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.780638 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.783892 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.783920 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.783930 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.783944 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.783954 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:24Z","lastTransitionTime":"2026-02-26T11:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.795532 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.799777 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.799812 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.799822 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.799838 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:24 crc kubenswrapper[4699]: I0226 11:13:24.799851 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:24Z","lastTransitionTime":"2026-02-26T11:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.847750 4699 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f74e239-3726-44b7-b791-47b33a2699be\\\",\\\"systemUUID\\\":\\\"e4404db5-04f3-42e6-90eb-21c35124a700\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:24 crc kubenswrapper[4699]: E0226 11:13:24.847861 4699 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.260198 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:26 crc kubenswrapper[4699]: E0226 11:13:26.260339 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.260410 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.260541 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:26 crc kubenswrapper[4699]: E0226 11:13:26.260544 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.260564 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:26 crc kubenswrapper[4699]: E0226 11:13:26.260820 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:26 crc kubenswrapper[4699]: E0226 11:13:26.260839 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.276299 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee33485d-044d-4356-a626-df5e4625a4f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:12:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 11:12:07.177508 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 11:12:07.178221 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:12:07.179410 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3427621089/tls.crt::/tmp/serving-cert-3427621089/tls.key\\\\\\\"\\\\nI0226 11:12:07.693355 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 11:12:07.699737 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 11:12:07.699770 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 11:12:07.699794 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 11:12:07.699801 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 11:12:07.704076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 11:12:07.704163 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 11:12:07.704178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 11:12:07.704182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 
11:12:07.704187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 11:12:07.704192 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0226 11:12:07.704180 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0226 11:12:07.707201 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.293511 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ad0a6b-b6ae-493c-9d3c-22b171765374\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e3cbce38225d1f5dc3e506a9bc813f8b1e76e2f3315a83ce984c0b51ad0b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a914b4d63550a877b25fc802fe067357dc144ba29c8d4cff98b2d3d1c10baa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e0c4b60e5b4a7561832379bb034e4202ac67058677a2e7e362703e8f5952b83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://10b818c7e0d202c03b73fbaacb76a36fdf224d3674ac77a316eb94ce14a45836\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.309442 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.323641 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6005eb1355f9bc1dc9d5c6b4aa90bb33d74c16bd28f074c3266aea1b315fb1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6d74fea843381a23d5cabacf31e931a0dd98b4a11abcc02d3df951692965534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.338060 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6956c039-cf77-429b-8f7f-f93ba195d321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phthm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v5ctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc 
kubenswrapper[4699]: I0226 11:13:26.339552 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:26 crc kubenswrapper[4699]: E0226 11:13:26.339702 4699 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:13:26 crc kubenswrapper[4699]: E0226 11:13:26.339756 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs podName:6956c039-cf77-429b-8f7f-f93ba195d321 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.339743394 +0000 UTC m=+184.150569828 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs") pod "network-metrics-daemon-v5ctv" (UID: "6956c039-cf77-429b-8f7f-f93ba195d321") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.359642 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb09c7e7-a19a-4c5e-ace4-584f63f71fe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52919194c48ba8d2d5312f534646aa43d41aba48324f4aca79e502ae157df536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0dff35454006f3c8277e6b0f96fb676434096cede813fbd68783b3982e4a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb55f15e1f01b945bdfef17406997d2a9d7835fd4dddd59e5a6661b180f3ca22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0dfe169d520e86bf6d1f1712a91c38a5b4d90661868f45c203e8b3db917ebf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3059e4195de5c0d992c4854fd6f8f83e494688b7c11b5b29835bbda22f4050b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cef5999e9d480989b990803f3134d541b78f47eb97d312682d198507f853a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T11:11:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80a98e152c962355bb43eb7cd21f431853a8b19bb9e1cac28fe48a2bc86bd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a604176804dbcfad4c0f5cf5945c679818a0453a047091115fa9f8a8e573acb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.371864 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a028f97a63d8d7fa34c1029fb62aea26be554f157d879fdcf7553807a75d70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.386371 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95d160b5-697e-42fa-8cd0-8b7b337820c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff9ad002d597d66f0897adccd2319697d917dad946cc1e3fbb6b419397032e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8c2167e8d3e5078f58aa53ee8861687c577adbbc475e0fc2e340abf6afcc01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83232777cdd898271757bffefc3c36b87e28390feea956025cc67851b2e4ea61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677566e01c477fea7d7ca41804df01d70b206b79f0acacc945f3dad5dc76bb3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ffc
ec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ffcec6c52eae876bc972074bff0b05a6d42620a9c6f3f4b97e2f4ff904e3e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4d3547a1ae5e11f679ad08111ac12370edea14dddb45b168ca4b26295f3a38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df39bbf7f7afc2aafedfa54a04af62da4a9d363986585fd2eac61f4a1d4ed34c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5tczc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tfp9h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.398788 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd0f846-a702-4f37-a862-f620cb23e7bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1982fcdf20248152190117fa9699973c4f989d3a197f947e71f8066f8f16218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dd2547b7a142b99f0170df619d1b1750e287507d7f8b8bad3299a4a3d23849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nrn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.412239 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9106aba-3c7b-4722-a051-a7fe53d9b619\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55229c06747f2b5d388af00f4d2aa770f2786ea7f8015579fb05381eee44235f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4003edefbd2aac9a706e0d56e2791c34c4bc9a820e5cda0ab4cf3172fc4f5c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:11:32Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec 
cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 11:11:02.135153 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 11:11:02.147322 1 observer_polling.go:159] Starting file observer\\\\nI0226 11:11:02.237317 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 11:11:02.242318 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 11:11:32.466165 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbdf473c08abfc93be6ee643eb86aebdaf8cae59cbe4c844b800862b15f7434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0bc27153e659e049d639cf7b8963c1485433aed35f5efe5e88f1cc275d92a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1cda06107373ef4a7be9d68d9a39ed9f7351913e1deb1bd9e7d825d93ee54a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:10:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.432389 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2k6b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ce77d1-5287-4674-aeda-810070efbb29\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6g8bl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2k6b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.446802 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8a5817d51e63aa7fe3012de90c758002d8db1d9d829509434f8925c373f0b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bdaf031a209
43a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-28p79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.458965 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gs59q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbc7948-b89f-46f1-8ebd-c5406fee4e30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b10028b125687697ebd614eb89d3715bc4913fd8dabed0a60e56572d469f5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8kzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gs59q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.474382 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.490716 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.506969 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250276015ba7d961b83bd0201443683d6f3558be50759ff714cff05dd2cc6393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.519567 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gbl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db105b7b-9325-4f20-a760-06c045ea844f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed93026c595f2e3015e02073b545216e79ac8e68085a3b2ccead1a25e6c0afce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9n6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gbl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: I0226 11:13:26.543202 4699 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:12:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:13:14Z\\\",\\\"message\\\":\\\"ernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:13:14.559644 7027 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.559182 7027 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.560083 7027 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.560811 7027 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0226 11:13:14.560836 7027 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0226 11:13:14.561218 7027 factory.go:656] Stopping watch factory\\\\nI0226 11:13:14.561367 7027 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:13:14.561415 7027 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:13:14.562003 7027 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:13:14.562038 7027 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 11:13:14.562134 7027 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:13:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cw6vx_openshift-ovn-kubernetes(cd12b2df-7af6-45bc-88e7-d5e5e6451e65)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0570cc3dc8f007e88
024df69cf743715a318df16fb782800259eccc698cb124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnmg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:12:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cw6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:13:26Z is after 2025-08-24T17:21:41Z" Feb 26 11:13:26 crc kubenswrapper[4699]: E0226 11:13:26.563568 4699 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:13:28 crc kubenswrapper[4699]: I0226 11:13:28.261420 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:28 crc kubenswrapper[4699]: I0226 11:13:28.261413 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:28 crc kubenswrapper[4699]: I0226 11:13:28.261500 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:28 crc kubenswrapper[4699]: I0226 11:13:28.261508 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:28 crc kubenswrapper[4699]: E0226 11:13:28.261596 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:28 crc kubenswrapper[4699]: E0226 11:13:28.261718 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:28 crc kubenswrapper[4699]: E0226 11:13:28.262218 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:28 crc kubenswrapper[4699]: E0226 11:13:28.262282 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:28 crc kubenswrapper[4699]: I0226 11:13:28.262421 4699 scope.go:117] "RemoveContainer" containerID="063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0" Feb 26 11:13:28 crc kubenswrapper[4699]: E0226 11:13:28.262665 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cw6vx_openshift-ovn-kubernetes(cd12b2df-7af6-45bc-88e7-d5e5e6451e65)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" Feb 26 11:13:28 crc kubenswrapper[4699]: I0226 11:13:28.274252 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 26 11:13:30 crc kubenswrapper[4699]: I0226 11:13:30.260348 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:30 crc kubenswrapper[4699]: I0226 11:13:30.260430 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:30 crc kubenswrapper[4699]: I0226 11:13:30.260500 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:30 crc kubenswrapper[4699]: E0226 11:13:30.260503 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:30 crc kubenswrapper[4699]: E0226 11:13:30.260578 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:30 crc kubenswrapper[4699]: I0226 11:13:30.260610 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:30 crc kubenswrapper[4699]: E0226 11:13:30.260662 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:30 crc kubenswrapper[4699]: E0226 11:13:30.260724 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:31 crc kubenswrapper[4699]: E0226 11:13:31.565250 4699 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.051743 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2k6b7_32ce77d1-5287-4674-aeda-810070efbb29/kube-multus/0.log" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.051796 4699 generic.go:334] "Generic (PLEG): container finished" podID="32ce77d1-5287-4674-aeda-810070efbb29" containerID="b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3" exitCode=1 Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.051835 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2k6b7" event={"ID":"32ce77d1-5287-4674-aeda-810070efbb29","Type":"ContainerDied","Data":"b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3"} Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.052268 4699 scope.go:117] "RemoveContainer" containerID="b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.156435 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns/node-resolver-gbl2h" podStartSLOduration=106.156419091 podStartE2EDuration="1m46.156419091s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:32.130720903 +0000 UTC m=+157.941547347" watchObservedRunningTime="2026-02-26 11:13:32.156419091 +0000 UTC m=+157.967245525" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.170016 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gs59q" podStartSLOduration=106.169996966 podStartE2EDuration="1m46.169996966s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:32.169779508 +0000 UTC m=+157.980605952" watchObservedRunningTime="2026-02-26 11:13:32.169996966 +0000 UTC m=+157.980823400" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.209496 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=29.209434765 podStartE2EDuration="29.209434765s" podCreationTimestamp="2026-02-26 11:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:32.208218062 +0000 UTC m=+158.019044506" watchObservedRunningTime="2026-02-26 11:13:32.209434765 +0000 UTC m=+158.020261189" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.211171 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.211154217 podStartE2EDuration="1m8.211154217s" podCreationTimestamp="2026-02-26 11:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-26 11:13:32.190201658 +0000 UTC m=+158.001028112" watchObservedRunningTime="2026-02-26 11:13:32.211154217 +0000 UTC m=+158.021980671" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.261236 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.261311 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.261319 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.261236 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:32 crc kubenswrapper[4699]: E0226 11:13:32.261425 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:32 crc kubenswrapper[4699]: E0226 11:13:32.261614 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:32 crc kubenswrapper[4699]: E0226 11:13:32.261633 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:32 crc kubenswrapper[4699]: E0226 11:13:32.261754 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.295242 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.295223131 podStartE2EDuration="4.295223131s" podCreationTimestamp="2026-02-26 11:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:32.294476734 +0000 UTC m=+158.105303168" watchObservedRunningTime="2026-02-26 11:13:32.295223131 +0000 UTC m=+158.106049585" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.334603 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=47.334581417 podStartE2EDuration="47.334581417s" podCreationTimestamp="2026-02-26 11:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:32.319717336 +0000 UTC m=+158.130543770" watchObservedRunningTime="2026-02-26 11:13:32.334581417 +0000 UTC m=+158.145407851" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.354337 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-tfp9h" podStartSLOduration=106.354314832 podStartE2EDuration="1m46.354314832s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:32.353614787 +0000 UTC m=+158.164441241" watchObservedRunningTime="2026-02-26 11:13:32.354314832 +0000 UTC m=+158.165141276" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.390094 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=12.39007033 podStartE2EDuration="12.39007033s" podCreationTimestamp="2026-02-26 11:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:32.388980891 +0000 UTC m=+158.199807335" watchObservedRunningTime="2026-02-26 11:13:32.39007033 +0000 UTC m=+158.200896764" Feb 26 11:13:32 crc kubenswrapper[4699]: I0226 11:13:32.390465 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nrn4" podStartSLOduration=105.390457533 podStartE2EDuration="1m45.390457533s" podCreationTimestamp="2026-02-26 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:32.370274522 +0000 UTC m=+158.181100976" watchObservedRunningTime="2026-02-26 11:13:32.390457533 +0000 UTC m=+158.201283967" Feb 26 
11:13:33 crc kubenswrapper[4699]: I0226 11:13:33.060317 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2k6b7_32ce77d1-5287-4674-aeda-810070efbb29/kube-multus/0.log" Feb 26 11:13:33 crc kubenswrapper[4699]: I0226 11:13:33.060373 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2k6b7" event={"ID":"32ce77d1-5287-4674-aeda-810070efbb29","Type":"ContainerStarted","Data":"143a97abf6e80c5d27a74181526e16c9b98e3306181c3568beb75b7c14de4b31"} Feb 26 11:13:33 crc kubenswrapper[4699]: I0226 11:13:33.075938 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podStartSLOduration=107.075921846 podStartE2EDuration="1m47.075921846s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:32.423325468 +0000 UTC m=+158.234151902" watchObservedRunningTime="2026-02-26 11:13:33.075921846 +0000 UTC m=+158.886748290" Feb 26 11:13:33 crc kubenswrapper[4699]: I0226 11:13:33.076156 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2k6b7" podStartSLOduration=107.076151555 podStartE2EDuration="1m47.076151555s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:33.075565594 +0000 UTC m=+158.886392048" watchObservedRunningTime="2026-02-26 11:13:33.076151555 +0000 UTC m=+158.886977979" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.260026 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.260087 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.260087 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.260237 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:34 crc kubenswrapper[4699]: E0226 11:13:34.260227 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:34 crc kubenswrapper[4699]: E0226 11:13:34.260363 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:34 crc kubenswrapper[4699]: E0226 11:13:34.260422 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:34 crc kubenswrapper[4699]: E0226 11:13:34.260487 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.886524 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.886560 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.886568 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.886581 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.886589 4699 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:13:34Z","lastTransitionTime":"2026-02-26T11:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.928158 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47"] Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.929056 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.931658 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.932046 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.932277 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.932596 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.933378 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80a20711-23cf-449e-891a-acba8d452c48-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.933484 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/80a20711-23cf-449e-891a-acba8d452c48-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: 
\"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.933557 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80a20711-23cf-449e-891a-acba8d452c48-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.933604 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80a20711-23cf-449e-891a-acba8d452c48-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:34 crc kubenswrapper[4699]: I0226 11:13:34.933695 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/80a20711-23cf-449e-891a-acba8d452c48-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.034508 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80a20711-23cf-449e-891a-acba8d452c48-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.034579 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/80a20711-23cf-449e-891a-acba8d452c48-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.034599 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80a20711-23cf-449e-891a-acba8d452c48-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.034624 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/80a20711-23cf-449e-891a-acba8d452c48-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.034651 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80a20711-23cf-449e-891a-acba8d452c48-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.034674 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/80a20711-23cf-449e-891a-acba8d452c48-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: 
\"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.034758 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/80a20711-23cf-449e-891a-acba8d452c48-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.035507 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80a20711-23cf-449e-891a-acba8d452c48-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.040897 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80a20711-23cf-449e-891a-acba8d452c48-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.054512 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80a20711-23cf-449e-891a-acba8d452c48-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bsj47\" (UID: \"80a20711-23cf-449e-891a-acba8d452c48\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.221732 4699 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 
11:13:35.231870 4699 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 11:13:35 crc kubenswrapper[4699]: I0226 11:13:35.241736 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" Feb 26 11:13:36 crc kubenswrapper[4699]: I0226 11:13:36.069307 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" event={"ID":"80a20711-23cf-449e-891a-acba8d452c48","Type":"ContainerStarted","Data":"611ddfebed22732aaf5520081cd27230ed43d015e2fe4c756eb480bed82899bb"} Feb 26 11:13:36 crc kubenswrapper[4699]: I0226 11:13:36.069354 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" event={"ID":"80a20711-23cf-449e-891a-acba8d452c48","Type":"ContainerStarted","Data":"2cc628b383fe90399d2ba5d6f403315a5522ed1fd1b3524410f0cb62790404ae"} Feb 26 11:13:36 crc kubenswrapper[4699]: I0226 11:13:36.083920 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bsj47" podStartSLOduration=110.083900154 podStartE2EDuration="1m50.083900154s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:36.082973231 +0000 UTC m=+161.893799665" watchObservedRunningTime="2026-02-26 11:13:36.083900154 +0000 UTC m=+161.894726588" Feb 26 11:13:36 crc kubenswrapper[4699]: I0226 11:13:36.259718 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:36 crc kubenswrapper[4699]: I0226 11:13:36.262506 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:36 crc kubenswrapper[4699]: I0226 11:13:36.262534 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:36 crc kubenswrapper[4699]: E0226 11:13:36.262511 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:36 crc kubenswrapper[4699]: I0226 11:13:36.262564 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:36 crc kubenswrapper[4699]: E0226 11:13:36.262640 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:36 crc kubenswrapper[4699]: E0226 11:13:36.262702 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:36 crc kubenswrapper[4699]: E0226 11:13:36.262792 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:36 crc kubenswrapper[4699]: E0226 11:13:36.565792 4699 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:13:38 crc kubenswrapper[4699]: I0226 11:13:38.259667 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:38 crc kubenswrapper[4699]: I0226 11:13:38.259730 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:38 crc kubenswrapper[4699]: E0226 11:13:38.259807 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:38 crc kubenswrapper[4699]: I0226 11:13:38.259839 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:38 crc kubenswrapper[4699]: I0226 11:13:38.259858 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:38 crc kubenswrapper[4699]: E0226 11:13:38.259935 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:38 crc kubenswrapper[4699]: E0226 11:13:38.260130 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:38 crc kubenswrapper[4699]: E0226 11:13:38.260357 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:40 crc kubenswrapper[4699]: I0226 11:13:40.260033 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:40 crc kubenswrapper[4699]: I0226 11:13:40.260079 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:40 crc kubenswrapper[4699]: I0226 11:13:40.260047 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:40 crc kubenswrapper[4699]: E0226 11:13:40.260198 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:40 crc kubenswrapper[4699]: I0226 11:13:40.260259 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:40 crc kubenswrapper[4699]: E0226 11:13:40.260334 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:40 crc kubenswrapper[4699]: E0226 11:13:40.260431 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:40 crc kubenswrapper[4699]: E0226 11:13:40.260493 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:41 crc kubenswrapper[4699]: E0226 11:13:41.567330 4699 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:13:42 crc kubenswrapper[4699]: I0226 11:13:42.260699 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:42 crc kubenswrapper[4699]: I0226 11:13:42.260759 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:42 crc kubenswrapper[4699]: I0226 11:13:42.260767 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:42 crc kubenswrapper[4699]: I0226 11:13:42.260713 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:42 crc kubenswrapper[4699]: E0226 11:13:42.260841 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:42 crc kubenswrapper[4699]: E0226 11:13:42.260936 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:42 crc kubenswrapper[4699]: E0226 11:13:42.261027 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:42 crc kubenswrapper[4699]: E0226 11:13:42.261083 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:43 crc kubenswrapper[4699]: I0226 11:13:43.260393 4699 scope.go:117] "RemoveContainer" containerID="063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0" Feb 26 11:13:44 crc kubenswrapper[4699]: I0226 11:13:44.096417 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovnkube-controller/2.log" Feb 26 11:13:44 crc kubenswrapper[4699]: I0226 11:13:44.100260 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerStarted","Data":"674b1ddc9ce52057921afe22948e78b0ac743b734851b7422144e06a6bedf770"} Feb 26 11:13:44 crc kubenswrapper[4699]: I0226 11:13:44.260360 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:44 crc kubenswrapper[4699]: I0226 11:13:44.260410 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:44 crc kubenswrapper[4699]: E0226 11:13:44.260532 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:44 crc kubenswrapper[4699]: I0226 11:13:44.260378 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:44 crc kubenswrapper[4699]: I0226 11:13:44.260603 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:44 crc kubenswrapper[4699]: E0226 11:13:44.260745 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:44 crc kubenswrapper[4699]: E0226 11:13:44.260782 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:44 crc kubenswrapper[4699]: E0226 11:13:44.260840 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:45 crc kubenswrapper[4699]: I0226 11:13:45.103996 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:13:45 crc kubenswrapper[4699]: I0226 11:13:45.133508 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podStartSLOduration=119.133489472 podStartE2EDuration="1m59.133489472s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:45.132405374 +0000 UTC m=+170.943231828" watchObservedRunningTime="2026-02-26 11:13:45.133489472 +0000 UTC m=+170.944315906" Feb 26 11:13:45 crc kubenswrapper[4699]: I0226 11:13:45.386171 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v5ctv"] Feb 26 11:13:45 crc kubenswrapper[4699]: I0226 11:13:45.386308 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:45 crc kubenswrapper[4699]: E0226 11:13:45.386406 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:46 crc kubenswrapper[4699]: I0226 11:13:46.260415 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:46 crc kubenswrapper[4699]: I0226 11:13:46.260614 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:46 crc kubenswrapper[4699]: I0226 11:13:46.260614 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:46 crc kubenswrapper[4699]: E0226 11:13:46.261718 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:46 crc kubenswrapper[4699]: E0226 11:13:46.261811 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:46 crc kubenswrapper[4699]: E0226 11:13:46.261954 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:46 crc kubenswrapper[4699]: E0226 11:13:46.567866 4699 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:13:47 crc kubenswrapper[4699]: I0226 11:13:47.259825 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:47 crc kubenswrapper[4699]: E0226 11:13:47.260029 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:48 crc kubenswrapper[4699]: I0226 11:13:48.260099 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:48 crc kubenswrapper[4699]: I0226 11:13:48.260151 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:48 crc kubenswrapper[4699]: I0226 11:13:48.260185 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:48 crc kubenswrapper[4699]: E0226 11:13:48.260283 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:48 crc kubenswrapper[4699]: E0226 11:13:48.260348 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:48 crc kubenswrapper[4699]: E0226 11:13:48.260423 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:49 crc kubenswrapper[4699]: I0226 11:13:49.260294 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:49 crc kubenswrapper[4699]: E0226 11:13:49.260428 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:50 crc kubenswrapper[4699]: I0226 11:13:50.260176 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:50 crc kubenswrapper[4699]: E0226 11:13:50.260324 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:13:50 crc kubenswrapper[4699]: I0226 11:13:50.260176 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:50 crc kubenswrapper[4699]: I0226 11:13:50.260386 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:50 crc kubenswrapper[4699]: E0226 11:13:50.260451 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:13:50 crc kubenswrapper[4699]: E0226 11:13:50.260534 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:13:51 crc kubenswrapper[4699]: I0226 11:13:51.260517 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:51 crc kubenswrapper[4699]: E0226 11:13:51.260737 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v5ctv" podUID="6956c039-cf77-429b-8f7f-f93ba195d321" Feb 26 11:13:52 crc kubenswrapper[4699]: I0226 11:13:52.259756 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:13:52 crc kubenswrapper[4699]: I0226 11:13:52.259846 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:13:52 crc kubenswrapper[4699]: I0226 11:13:52.260040 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:13:52 crc kubenswrapper[4699]: I0226 11:13:52.264504 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 11:13:52 crc kubenswrapper[4699]: I0226 11:13:52.264827 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 11:13:52 crc kubenswrapper[4699]: I0226 11:13:52.265276 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 11:13:52 crc kubenswrapper[4699]: I0226 11:13:52.265475 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 11:13:53 crc kubenswrapper[4699]: I0226 11:13:53.260321 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:53 crc kubenswrapper[4699]: I0226 11:13:53.262386 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 11:13:53 crc kubenswrapper[4699]: I0226 11:13:53.262696 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 11:13:54 crc kubenswrapper[4699]: I0226 11:13:54.981925 4699 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.024204 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.024597 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qsj62"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 
11:13:55.024764 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gsl8w"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.025032 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.025737 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.026044 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.029295 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-22qbz"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.029699 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-f8s5j"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.029903 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.030255 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.030925 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.031463 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.031846 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.032349 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.035532 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.036093 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.036359 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.036743 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.036779 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.038900 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-hnsh7"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.039355 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.039638 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.039668 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.040518 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tcnxt"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.040952 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tcnxt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.049657 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.050744 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.051058 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.051481 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.051750 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.051897 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.051755 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 11:13:55 crc 
kubenswrapper[4699]: I0226 11:13:55.052144 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.053905 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pw64v"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.054759 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xbpcs"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.055715 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.057699 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.065548 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.065808 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.066280 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.066301 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.066378 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.072740 4699 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.072944 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.074014 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.084331 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.101586 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.118234 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.119627 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.119831 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.120162 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.120303 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.120819 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.121584 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.121733 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.121831 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.121869 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.121963 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122166 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122211 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122250 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122302 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122325 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 11:13:55 crc 
kubenswrapper[4699]: I0226 11:13:55.122362 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122421 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122429 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122453 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122528 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122540 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122595 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122641 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122691 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122645 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122735 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" 
Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122800 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122843 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122805 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122883 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122757 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.122844 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.123006 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.123043 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.123063 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.123225 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.123402 4699 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.132160 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.132523 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.132654 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.132869 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.135781 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.138882 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.139026 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.139430 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.139511 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.139793 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.140835 4699 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.141092 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.141411 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.144077 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.144231 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.144442 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.144601 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.146308 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149248 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b070a40-85a6-42e6-a1bd-d834170a9c9c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8rd9\" (UID: \"6b070a40-85a6-42e6-a1bd-d834170a9c9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149283 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afa5e1ce-a457-4771-ab06-2654a7801704-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149305 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h5hb\" (UniqueName: \"kubernetes.io/projected/afa5e1ce-a457-4771-ab06-2654a7801704-kube-api-access-7h5hb\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149324 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64bd7009-a06a-43e1-b265-3ea78b5801b9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149355 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcvd8\" (UniqueName: \"kubernetes.io/projected/9c1f6032-b723-4cb3-a93b-73d053eaf822-kube-api-access-vcvd8\") pod \"openshift-controller-manager-operator-756b6f6bc6-2zshh\" (UID: \"9c1f6032-b723-4cb3-a93b-73d053eaf822\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149372 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-audit-dir\") pod 
\"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149386 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7knj\" (UniqueName: \"kubernetes.io/projected/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-kube-api-access-q7knj\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149400 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvdh\" (UniqueName: \"kubernetes.io/projected/e6bdcf19-db76-497c-a2fe-a6de38fae724-kube-api-access-wvvdh\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149425 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afa5e1ce-a457-4771-ab06-2654a7801704-etcd-client\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149438 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-image-import-ca\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149454 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61a1581f-5367-4535-99bc-3f28547ab766-metrics-tls\") pod \"dns-operator-744455d44c-xbpcs\" (UID: \"61a1581f-5367-4535-99bc-3f28547ab766\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149473 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1f6032-b723-4cb3-a93b-73d053eaf822-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2zshh\" (UID: \"9c1f6032-b723-4cb3-a93b-73d053eaf822\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149487 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/744aa737-e6c7-4d6b-ba7d-a9479043ad29-serving-cert\") pod \"route-controller-manager-6576b87f9c-fq7g8\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149501 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-serving-cert\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149516 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5d015dd8-56c9-4f61-b133-4951cda91ca5-images\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149530 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-config\") pod \"route-controller-manager-6576b87f9c-fq7g8\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149548 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-config\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149564 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149579 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b070a40-85a6-42e6-a1bd-d834170a9c9c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8rd9\" (UID: \"6b070a40-85a6-42e6-a1bd-d834170a9c9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149591 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/afa5e1ce-a457-4771-ab06-2654a7801704-serving-cert\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149607 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-config\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149621 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-etcd-client\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149637 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-etcd-serving-ca\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149651 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-encryption-config\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149667 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-serving-cert\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149681 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149695 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64bd7009-a06a-43e1-b265-3ea78b5801b9-serving-cert\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149710 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149729 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: 
\"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149749 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d015dd8-56c9-4f61-b133-4951cda91ca5-config\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149768 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-service-ca\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149796 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-dir\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149814 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-client-ca\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149828 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjjgq\" (UniqueName: 
\"kubernetes.io/projected/796e9631-3388-48b1-8675-3fbc4b6e435d-kube-api-access-vjjgq\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149845 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03fd3407-9529-4638-89d6-cfc6b703e510-serving-cert\") pod \"openshift-config-operator-7777fb866f-vzj5b\" (UID: \"03fd3407-9529-4638-89d6-cfc6b703e510\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149860 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/afa5e1ce-a457-4771-ab06-2654a7801704-audit-policies\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149883 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-client-ca\") pod \"route-controller-manager-6576b87f9c-fq7g8\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149898 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwpnc\" (UniqueName: \"kubernetes.io/projected/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-kube-api-access-xwpnc\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 
11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149916 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-trusted-ca-bundle\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149949 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149966 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-oauth-config\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.149990 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64bd7009-a06a-43e1-b265-3ea78b5801b9-service-ca-bundle\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.150008 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.150027 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c1f6032-b723-4cb3-a93b-73d053eaf822-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2zshh\" (UID: \"9c1f6032-b723-4cb3-a93b-73d053eaf822\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.150042 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-audit\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.150057 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4jdw\" (UniqueName: \"kubernetes.io/projected/03fd3407-9529-4638-89d6-cfc6b703e510-kube-api-access-j4jdw\") pod \"openshift-config-operator-7777fb866f-vzj5b\" (UID: \"03fd3407-9529-4638-89d6-cfc6b703e510\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.150074 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-policies\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.150089 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.150128 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ttxr\" (UniqueName: \"kubernetes.io/projected/61a1581f-5367-4535-99bc-3f28547ab766-kube-api-access-2ttxr\") pod \"dns-operator-744455d44c-xbpcs\" (UID: \"61a1581f-5367-4535-99bc-3f28547ab766\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.150144 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-config\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.150159 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.150175 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/03fd3407-9529-4638-89d6-cfc6b703e510-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vzj5b\" (UID: \"03fd3407-9529-4638-89d6-cfc6b703e510\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.153824 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.153862 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-node-pullsecrets\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.153880 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64bd7009-a06a-43e1-b265-3ea78b5801b9-config\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.153896 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: 
\"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.153911 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.153927 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4qjv\" (UniqueName: \"kubernetes.io/projected/5d015dd8-56c9-4f61-b133-4951cda91ca5-kube-api-access-j4qjv\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.153942 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/afa5e1ce-a457-4771-ab06-2654a7801704-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.153956 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmzs7\" (UniqueName: \"kubernetes.io/projected/72b1bc55-f48b-4d90-ab02-3a80438096b6-kube-api-access-rmzs7\") pod \"downloads-7954f5f757-tcnxt\" (UID: \"72b1bc55-f48b-4d90-ab02-3a80438096b6\") " pod="openshift-console/downloads-7954f5f757-tcnxt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.153972 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f94s2\" (UniqueName: \"kubernetes.io/projected/64bd7009-a06a-43e1-b265-3ea78b5801b9-kube-api-access-f94s2\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.153985 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/796e9631-3388-48b1-8675-3fbc4b6e435d-serving-cert\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.154002 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.154018 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa5e1ce-a457-4771-ab06-2654a7801704-audit-dir\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.154034 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66m62\" (UniqueName: \"kubernetes.io/projected/744aa737-e6c7-4d6b-ba7d-a9479043ad29-kube-api-access-66m62\") pod 
\"route-controller-manager-6576b87f9c-fq7g8\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.154049 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d015dd8-56c9-4f61-b133-4951cda91ca5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.154064 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-oauth-serving-cert\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.154083 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.154102 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/afa5e1ce-a457-4771-ab06-2654a7801704-encryption-config\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: 
I0226 11:13:55.154558 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b070a40-85a6-42e6-a1bd-d834170a9c9c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8rd9\" (UID: \"6b070a40-85a6-42e6-a1bd-d834170a9c9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.152756 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-p742p"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.153774 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.155401 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j6vfb"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.155770 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.155987 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.156306 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.156331 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.157171 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.157428 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.158239 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.158454 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.158575 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.158676 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.159290 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.159666 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.160799 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.161046 4699 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.161232 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.171990 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xm88w"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.172600 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.173278 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8656"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.180162 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.190109 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.191172 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.209142 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.209402 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.209630 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.209832 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.210185 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.210884 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-k9bv4"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.211438 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.211927 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.211980 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.212519 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.212702 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.212788 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.213091 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.213305 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.213532 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.213873 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.215725 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.215923 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.216659 4699 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.217072 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.217217 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hzqgp"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.217380 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.217860 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.217874 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.217967 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.218167 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.218187 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.218552 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.218657 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.218700 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.219827 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.220200 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.221007 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.221353 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.221790 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.222777 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.223456 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.224344 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zqgj9"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.224798 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.225472 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.226193 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.229201 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.229221 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.230047 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cd5qf"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.230501 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.230568 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.230602 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.230998 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.233594 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.246564 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.247336 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.247567 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.248650 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.249834 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.250712 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.250818 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.251250 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gsl8w"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.251291 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-22qbz"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.251306 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-tnwpn"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.251722 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255245 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnh6\" (UniqueName: \"kubernetes.io/projected/34163385-0c26-4d54-a06a-11f9ef09901d-kube-api-access-qgnh6\") pod \"multus-admission-controller-857f4d67dd-k9bv4\" (UID: \"34163385-0c26-4d54-a06a-11f9ef09901d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255285 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-serving-cert\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255311 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-f8s5j\" (UID: 
\"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255329 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64bd7009-a06a-43e1-b265-3ea78b5801b9-serving-cert\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255347 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255364 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255379 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d015dd8-56c9-4f61-b133-4951cda91ca5-config\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255393 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-service-ca\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255409 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8m6h\" (UniqueName: \"kubernetes.io/projected/fab52d01-f907-44cb-8d5f-162116d75fc9-kube-api-access-d8m6h\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255428 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a97e310-1811-48a9-a31a-eb9a0321d280-service-ca-bundle\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255443 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-dir\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255471 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-client-ca\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255490 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vjjgq\" (UniqueName: \"kubernetes.io/projected/796e9631-3388-48b1-8675-3fbc4b6e435d-kube-api-access-vjjgq\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255507 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03fd3407-9529-4638-89d6-cfc6b703e510-serving-cert\") pod \"openshift-config-operator-7777fb866f-vzj5b\" (UID: \"03fd3407-9529-4638-89d6-cfc6b703e510\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255521 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/afa5e1ce-a457-4771-ab06-2654a7801704-audit-policies\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255537 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-client-ca\") pod \"route-controller-manager-6576b87f9c-fq7g8\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255552 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwpnc\" (UniqueName: \"kubernetes.io/projected/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-kube-api-access-xwpnc\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " 
pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255567 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-trusted-ca-bundle\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255584 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255600 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-oauth-config\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255615 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89928475-c3fb-415f-a244-6292dc8adc33-config\") pod \"kube-controller-manager-operator-78b949d7b-gpjvh\" (UID: \"89928475-c3fb-415f-a244-6292dc8adc33\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255640 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/64bd7009-a06a-43e1-b265-3ea78b5801b9-service-ca-bundle\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255656 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255672 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89928475-c3fb-415f-a244-6292dc8adc33-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gpjvh\" (UID: \"89928475-c3fb-415f-a244-6292dc8adc33\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255691 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c1f6032-b723-4cb3-a93b-73d053eaf822-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2zshh\" (UID: \"9c1f6032-b723-4cb3-a93b-73d053eaf822\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255707 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-audit\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " 
pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255723 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4jdw\" (UniqueName: \"kubernetes.io/projected/03fd3407-9529-4638-89d6-cfc6b703e510-kube-api-access-j4jdw\") pod \"openshift-config-operator-7777fb866f-vzj5b\" (UID: \"03fd3407-9529-4638-89d6-cfc6b703e510\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255734 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qsj62"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255765 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255777 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j6vfb"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255790 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255802 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hnsh7"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255812 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xbpcs"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255741 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/34163385-0c26-4d54-a06a-11f9ef09901d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-k9bv4\" (UID: 
\"34163385-0c26-4d54-a06a-11f9ef09901d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255874 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fab52d01-f907-44cb-8d5f-162116d75fc9-etcd-ca\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255911 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-policies\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255935 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255967 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ttxr\" (UniqueName: \"kubernetes.io/projected/61a1581f-5367-4535-99bc-3f28547ab766-kube-api-access-2ttxr\") pod \"dns-operator-744455d44c-xbpcs\" (UID: \"61a1581f-5367-4535-99bc-3f28547ab766\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255983 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mxpf\" (UniqueName: 
\"kubernetes.io/projected/460579d9-ed16-49b7-a588-ef20ceb9bbf4-kube-api-access-2mxpf\") pod \"cluster-samples-operator-665b6dd947-9tm8w\" (UID: \"460579d9-ed16-49b7-a588-ef20ceb9bbf4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.255999 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wks59\" (UniqueName: \"kubernetes.io/projected/4a97e310-1811-48a9-a31a-eb9a0321d280-kube-api-access-wks59\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256017 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-config\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256055 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256081 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/03fd3407-9529-4638-89d6-cfc6b703e510-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vzj5b\" (UID: \"03fd3407-9529-4638-89d6-cfc6b703e510\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" 
Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256103 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgwjg\" (UniqueName: \"kubernetes.io/projected/bad776f4-e24b-41f1-88d8-2b1fe6258783-kube-api-access-tgwjg\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9wj4\" (UID: \"bad776f4-e24b-41f1-88d8-2b1fe6258783\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256155 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256185 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tkln\" (UniqueName: \"kubernetes.io/projected/36efccb8-7513-43d0-8952-d7ad9546da8e-kube-api-access-2tkln\") pod \"machine-config-controller-84d6567774-tpntx\" (UID: \"36efccb8-7513-43d0-8952-d7ad9546da8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256206 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89928475-c3fb-415f-a244-6292dc8adc33-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gpjvh\" (UID: \"89928475-c3fb-415f-a244-6292dc8adc33\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256231 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j4qjv\" (UniqueName: \"kubernetes.io/projected/5d015dd8-56c9-4f61-b133-4951cda91ca5-kube-api-access-j4qjv\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256253 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/afa5e1ce-a457-4771-ab06-2654a7801704-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256279 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-node-pullsecrets\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256305 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64bd7009-a06a-43e1-b265-3ea78b5801b9-config\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256332 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 
11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256362 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256387 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmzs7\" (UniqueName: \"kubernetes.io/projected/72b1bc55-f48b-4d90-ab02-3a80438096b6-kube-api-access-rmzs7\") pod \"downloads-7954f5f757-tcnxt\" (UID: \"72b1bc55-f48b-4d90-ab02-3a80438096b6\") " pod="openshift-console/downloads-7954f5f757-tcnxt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256412 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/36efccb8-7513-43d0-8952-d7ad9546da8e-proxy-tls\") pod \"machine-config-controller-84d6567774-tpntx\" (UID: \"36efccb8-7513-43d0-8952-d7ad9546da8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256441 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f94s2\" (UniqueName: \"kubernetes.io/projected/64bd7009-a06a-43e1-b265-3ea78b5801b9-kube-api-access-f94s2\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256469 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/796e9631-3388-48b1-8675-3fbc4b6e435d-serving-cert\") pod 
\"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256502 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256530 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa5e1ce-a457-4771-ab06-2654a7801704-audit-dir\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256555 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66m62\" (UniqueName: \"kubernetes.io/projected/744aa737-e6c7-4d6b-ba7d-a9479043ad29-kube-api-access-66m62\") pod \"route-controller-manager-6576b87f9c-fq7g8\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256580 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d015dd8-56c9-4f61-b133-4951cda91ca5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256602 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-oauth-serving-cert\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256629 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256656 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/afa5e1ce-a457-4771-ab06-2654a7801704-encryption-config\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256682 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b070a40-85a6-42e6-a1bd-d834170a9c9c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8rd9\" (UID: \"6b070a40-85a6-42e6-a1bd-d834170a9c9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256704 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b070a40-85a6-42e6-a1bd-d834170a9c9c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8rd9\" (UID: \"6b070a40-85a6-42e6-a1bd-d834170a9c9c\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256726 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afa5e1ce-a457-4771-ab06-2654a7801704-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256748 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h5hb\" (UniqueName: \"kubernetes.io/projected/afa5e1ce-a457-4771-ab06-2654a7801704-kube-api-access-7h5hb\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256775 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/36efccb8-7513-43d0-8952-d7ad9546da8e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tpntx\" (UID: \"36efccb8-7513-43d0-8952-d7ad9546da8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256802 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a97e310-1811-48a9-a31a-eb9a0321d280-metrics-certs\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256829 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/64bd7009-a06a-43e1-b265-3ea78b5801b9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256853 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wrjv\" (UniqueName: \"kubernetes.io/projected/e0ecd5cc-b456-4d69-897c-5fd543842440-kube-api-access-6wrjv\") pod \"kube-storage-version-migrator-operator-b67b599dd-gngtb\" (UID: \"e0ecd5cc-b456-4d69-897c-5fd543842440\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256877 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab52d01-f907-44cb-8d5f-162116d75fc9-serving-cert\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256898 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fab52d01-f907-44cb-8d5f-162116d75fc9-etcd-service-ca\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.256921 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7frcg\" (UniqueName: \"kubernetes.io/projected/af5429d7-39d0-4b17-8219-21c8491384ae-kube-api-access-7frcg\") pod \"migrator-59844c95c7-k6wtb\" (UID: \"af5429d7-39d0-4b17-8219-21c8491384ae\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257003 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ecd5cc-b456-4d69-897c-5fd543842440-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gngtb\" (UID: \"e0ecd5cc-b456-4d69-897c-5fd543842440\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257030 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvvdh\" (UniqueName: \"kubernetes.io/projected/e6bdcf19-db76-497c-a2fe-a6de38fae724-kube-api-access-wvvdh\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257054 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab52d01-f907-44cb-8d5f-162116d75fc9-config\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257107 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcvd8\" (UniqueName: \"kubernetes.io/projected/9c1f6032-b723-4cb3-a93b-73d053eaf822-kube-api-access-vcvd8\") pod \"openshift-controller-manager-operator-756b6f6bc6-2zshh\" (UID: \"9c1f6032-b723-4cb3-a93b-73d053eaf822\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257149 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-audit-dir\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257175 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7knj\" (UniqueName: \"kubernetes.io/projected/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-kube-api-access-q7knj\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257201 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4a97e310-1811-48a9-a31a-eb9a0321d280-stats-auth\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257241 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afa5e1ce-a457-4771-ab06-2654a7801704-etcd-client\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257267 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-image-import-ca\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257295 4699 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61a1581f-5367-4535-99bc-3f28547ab766-metrics-tls\") pod \"dns-operator-744455d44c-xbpcs\" (UID: \"61a1581f-5367-4535-99bc-3f28547ab766\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257325 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1f6032-b723-4cb3-a93b-73d053eaf822-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2zshh\" (UID: \"9c1f6032-b723-4cb3-a93b-73d053eaf822\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257333 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257352 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/744aa737-e6c7-4d6b-ba7d-a9479043ad29-serving-cert\") pod \"route-controller-manager-6576b87f9c-fq7g8\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257381 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-serving-cert\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc 
kubenswrapper[4699]: I0226 11:13:55.257408 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bad776f4-e24b-41f1-88d8-2b1fe6258783-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9wj4\" (UID: \"bad776f4-e24b-41f1-88d8-2b1fe6258783\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257435 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5d015dd8-56c9-4f61-b133-4951cda91ca5-images\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257566 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.258190 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-policies\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.258342 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-tnwpn" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.258668 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-config\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.258969 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/03fd3407-9529-4638-89d6-cfc6b703e510-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vzj5b\" (UID: \"03fd3407-9529-4638-89d6-cfc6b703e510\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.259287 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64bd7009-a06a-43e1-b265-3ea78b5801b9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.259542 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-audit-dir\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.259658 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-dir\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: 
\"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.260141 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-image-import-ca\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.262042 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.262212 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.262737 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/afa5e1ce-a457-4771-ab06-2654a7801704-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.262781 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-node-pullsecrets\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.263231 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64bd7009-a06a-43e1-b265-3ea78b5801b9-config\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.264002 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-client-ca\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.257454 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fab52d01-f907-44cb-8d5f-162116d75fc9-etcd-client\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.271683 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b070a40-85a6-42e6-a1bd-d834170a9c9c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8rd9\" (UID: \"6b070a40-85a6-42e6-a1bd-d834170a9c9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.271744 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afa5e1ce-a457-4771-ab06-2654a7801704-serving-cert\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.271785 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-config\") pod \"route-controller-manager-6576b87f9c-fq7g8\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.271827 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-config\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.271856 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.272187 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.272943 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b070a40-85a6-42e6-a1bd-d834170a9c9c-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-f8rd9\" (UID: \"6b070a40-85a6-42e6-a1bd-d834170a9c9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.273375 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.273442 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.274141 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.274401 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-client-ca\") pod \"route-controller-manager-6576b87f9c-fq7g8\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.274468 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " 
pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.275401 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afa5e1ce-a457-4771-ab06-2654a7801704-etcd-client\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.275774 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-serving-cert\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.276384 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d015dd8-56c9-4f61-b133-4951cda91ca5-config\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.277008 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64bd7009-a06a-43e1-b265-3ea78b5801b9-service-ca-bundle\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.277694 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-audit\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " 
pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.278985 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03fd3407-9529-4638-89d6-cfc6b703e510-serving-cert\") pod \"openshift-config-operator-7777fb866f-vzj5b\" (UID: \"03fd3407-9529-4638-89d6-cfc6b703e510\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.279098 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c1f6032-b723-4cb3-a93b-73d053eaf822-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2zshh\" (UID: \"9c1f6032-b723-4cb3-a93b-73d053eaf822\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.279851 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-trusted-ca-bundle\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.279989 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-oauth-serving-cert\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.281371 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-config\") pod \"route-controller-manager-6576b87f9c-fq7g8\" (UID: 
\"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.282078 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-config\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.282452 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.283559 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.283844 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b070a40-85a6-42e6-a1bd-d834170a9c9c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8rd9\" (UID: \"6b070a40-85a6-42e6-a1bd-d834170a9c9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.284855 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/5d015dd8-56c9-4f61-b133-4951cda91ca5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.285577 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5d015dd8-56c9-4f61-b133-4951cda91ca5-images\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.287790 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa5e1ce-a457-4771-ab06-2654a7801704-audit-dir\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.289222 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/afa5e1ce-a457-4771-ab06-2654a7801704-encryption-config\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.289696 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pw64v"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.294317 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.294346 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.271897 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ecd5cc-b456-4d69-897c-5fd543842440-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gngtb\" (UID: \"e0ecd5cc-b456-4d69-897c-5fd543842440\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.294410 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-config\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.294446 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/460579d9-ed16-49b7-a588-ef20ceb9bbf4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9tm8w\" (UID: \"460579d9-ed16-49b7-a588-ef20ceb9bbf4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.294479 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-encryption-config\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.294511 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4a97e310-1811-48a9-a31a-eb9a0321d280-default-certificate\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.294549 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-etcd-client\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.294576 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-etcd-serving-ca\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.294877 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64bd7009-a06a-43e1-b265-3ea78b5801b9-serving-cert\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.293845 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 
11:13:55.291586 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1f6032-b723-4cb3-a93b-73d053eaf822-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2zshh\" (UID: \"9c1f6032-b723-4cb3-a93b-73d053eaf822\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.292029 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-serving-cert\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.292645 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/796e9631-3388-48b1-8675-3fbc4b6e435d-serving-cert\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.295337 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-etcd-serving-ca\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.294250 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-oauth-config\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc 
kubenswrapper[4699]: I0226 11:13:55.295728 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-config\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.296564 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/744aa737-e6c7-4d6b-ba7d-a9479043ad29-serving-cert\") pod \"route-controller-manager-6576b87f9c-fq7g8\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.296869 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.296874 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afa5e1ce-a457-4771-ab06-2654a7801704-serving-cert\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.297278 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.297556 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/afa5e1ce-a457-4771-ab06-2654a7801704-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.298035 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-service-ca\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.275158 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/afa5e1ce-a457-4771-ab06-2654a7801704-audit-policies\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.300103 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61a1581f-5367-4535-99bc-3f28547ab766-metrics-tls\") pod \"dns-operator-744455d44c-xbpcs\" (UID: \"61a1581f-5367-4535-99bc-3f28547ab766\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.300190 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.301088 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.301762 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-etcd-client\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.303476 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.304734 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-encryption-config\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.305421 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.307804 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.309135 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.310583 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.320156 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-k9bv4"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.327303 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qzphl"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.328541 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.336779 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.336989 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rlx7c"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.337960 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rlx7c" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.341191 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.341255 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.344566 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.344601 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.344771 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.347804 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.347960 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tcnxt"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.347986 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.352532 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-f8s5j"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.352592 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c"] Feb 26 
11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.352637 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hzqgp"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.356644 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8656"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.356703 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cd5qf"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.359199 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tnwpn"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.359291 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qzphl"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.360379 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-r2phw"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.361235 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-r2phw" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.361839 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.363053 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.364138 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.365347 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zqgj9"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.366885 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.368257 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.369927 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-r2phw"] Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.375182 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.388203 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395336 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/89928475-c3fb-415f-a244-6292dc8adc33-config\") pod \"kube-controller-manager-operator-78b949d7b-gpjvh\" (UID: \"89928475-c3fb-415f-a244-6292dc8adc33\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395421 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89928475-c3fb-415f-a244-6292dc8adc33-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gpjvh\" (UID: \"89928475-c3fb-415f-a244-6292dc8adc33\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395454 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/34163385-0c26-4d54-a06a-11f9ef09901d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-k9bv4\" (UID: \"34163385-0c26-4d54-a06a-11f9ef09901d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395470 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fab52d01-f907-44cb-8d5f-162116d75fc9-etcd-ca\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395513 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mxpf\" (UniqueName: \"kubernetes.io/projected/460579d9-ed16-49b7-a588-ef20ceb9bbf4-kube-api-access-2mxpf\") pod \"cluster-samples-operator-665b6dd947-9tm8w\" (UID: \"460579d9-ed16-49b7-a588-ef20ceb9bbf4\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395541 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wks59\" (UniqueName: \"kubernetes.io/projected/4a97e310-1811-48a9-a31a-eb9a0321d280-kube-api-access-wks59\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395568 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgwjg\" (UniqueName: \"kubernetes.io/projected/bad776f4-e24b-41f1-88d8-2b1fe6258783-kube-api-access-tgwjg\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9wj4\" (UID: \"bad776f4-e24b-41f1-88d8-2b1fe6258783\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395596 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tkln\" (UniqueName: \"kubernetes.io/projected/36efccb8-7513-43d0-8952-d7ad9546da8e-kube-api-access-2tkln\") pod \"machine-config-controller-84d6567774-tpntx\" (UID: \"36efccb8-7513-43d0-8952-d7ad9546da8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395620 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89928475-c3fb-415f-a244-6292dc8adc33-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gpjvh\" (UID: \"89928475-c3fb-415f-a244-6292dc8adc33\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395694 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/36efccb8-7513-43d0-8952-d7ad9546da8e-proxy-tls\") pod \"machine-config-controller-84d6567774-tpntx\" (UID: \"36efccb8-7513-43d0-8952-d7ad9546da8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395911 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a97e310-1811-48a9-a31a-eb9a0321d280-metrics-certs\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.395951 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wrjv\" (UniqueName: \"kubernetes.io/projected/e0ecd5cc-b456-4d69-897c-5fd543842440-kube-api-access-6wrjv\") pod \"kube-storage-version-migrator-operator-b67b599dd-gngtb\" (UID: \"e0ecd5cc-b456-4d69-897c-5fd543842440\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.396082 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab52d01-f907-44cb-8d5f-162116d75fc9-serving-cert\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.396598 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/36efccb8-7513-43d0-8952-d7ad9546da8e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tpntx\" (UID: \"36efccb8-7513-43d0-8952-d7ad9546da8e\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.396640 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7frcg\" (UniqueName: \"kubernetes.io/projected/af5429d7-39d0-4b17-8219-21c8491384ae-kube-api-access-7frcg\") pod \"migrator-59844c95c7-k6wtb\" (UID: \"af5429d7-39d0-4b17-8219-21c8491384ae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.396681 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ecd5cc-b456-4d69-897c-5fd543842440-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gngtb\" (UID: \"e0ecd5cc-b456-4d69-897c-5fd543842440\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.396709 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fab52d01-f907-44cb-8d5f-162116d75fc9-etcd-service-ca\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.396782 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab52d01-f907-44cb-8d5f-162116d75fc9-config\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.396823 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/fab52d01-f907-44cb-8d5f-162116d75fc9-etcd-ca\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.396884 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4a97e310-1811-48a9-a31a-eb9a0321d280-stats-auth\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.397099 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bad776f4-e24b-41f1-88d8-2b1fe6258783-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9wj4\" (UID: \"bad776f4-e24b-41f1-88d8-2b1fe6258783\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.397582 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab52d01-f907-44cb-8d5f-162116d75fc9-config\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.398628 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fab52d01-f907-44cb-8d5f-162116d75fc9-etcd-client\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.397715 4699 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/36efccb8-7513-43d0-8952-d7ad9546da8e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tpntx\" (UID: \"36efccb8-7513-43d0-8952-d7ad9546da8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.397752 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fab52d01-f907-44cb-8d5f-162116d75fc9-etcd-service-ca\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.398691 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ecd5cc-b456-4d69-897c-5fd543842440-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gngtb\" (UID: \"e0ecd5cc-b456-4d69-897c-5fd543842440\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.398750 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/460579d9-ed16-49b7-a588-ef20ceb9bbf4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9tm8w\" (UID: \"460579d9-ed16-49b7-a588-ef20ceb9bbf4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.398795 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4a97e310-1811-48a9-a31a-eb9a0321d280-default-certificate\") pod \"router-default-5444994796-xm88w\" (UID: 
\"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.398848 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgnh6\" (UniqueName: \"kubernetes.io/projected/34163385-0c26-4d54-a06a-11f9ef09901d-kube-api-access-qgnh6\") pod \"multus-admission-controller-857f4d67dd-k9bv4\" (UID: \"34163385-0c26-4d54-a06a-11f9ef09901d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.399070 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8m6h\" (UniqueName: \"kubernetes.io/projected/fab52d01-f907-44cb-8d5f-162116d75fc9-kube-api-access-d8m6h\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.399104 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a97e310-1811-48a9-a31a-eb9a0321d280-service-ca-bundle\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.400499 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab52d01-f907-44cb-8d5f-162116d75fc9-serving-cert\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.405435 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fab52d01-f907-44cb-8d5f-162116d75fc9-etcd-client\") 
pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.408698 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.428336 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.448192 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.467941 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.488579 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.508058 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.510072 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a97e310-1811-48a9-a31a-eb9a0321d280-service-ca-bundle\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.528365 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.548550 4699 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.569534 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.583004 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4a97e310-1811-48a9-a31a-eb9a0321d280-default-certificate\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.588484 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.602255 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4a97e310-1811-48a9-a31a-eb9a0321d280-stats-auth\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.608611 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.620446 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a97e310-1811-48a9-a31a-eb9a0321d280-metrics-certs\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.628690 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 11:13:55 crc 
kubenswrapper[4699]: I0226 11:13:55.638695 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89928475-c3fb-415f-a244-6292dc8adc33-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gpjvh\" (UID: \"89928475-c3fb-415f-a244-6292dc8adc33\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.648221 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.668009 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.676153 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89928475-c3fb-415f-a244-6292dc8adc33-config\") pod \"kube-controller-manager-operator-78b949d7b-gpjvh\" (UID: \"89928475-c3fb-415f-a244-6292dc8adc33\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.688012 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.700282 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bad776f4-e24b-41f1-88d8-2b1fe6258783-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9wj4\" (UID: \"bad776f4-e24b-41f1-88d8-2b1fe6258783\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" Feb 26 
11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.708360 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.728886 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.747674 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.758556 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ecd5cc-b456-4d69-897c-5fd543842440-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gngtb\" (UID: \"e0ecd5cc-b456-4d69-897c-5fd543842440\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.769046 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.788650 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.808786 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.812809 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ecd5cc-b456-4d69-897c-5fd543842440-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gngtb\" (UID: 
\"e0ecd5cc-b456-4d69-897c-5fd543842440\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.828069 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.849002 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.869300 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.887985 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.908788 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.919947 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/34163385-0c26-4d54-a06a-11f9ef09901d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-k9bv4\" (UID: \"34163385-0c26-4d54-a06a-11f9ef09901d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.928360 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.947729 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 11:13:55 crc kubenswrapper[4699]: 
I0226 11:13:55.976218 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 26 11:13:55 crc kubenswrapper[4699]: I0226 11:13:55.988537 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.008094 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.028546 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.048302 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.068420 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.079577 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/36efccb8-7513-43d0-8952-d7ad9546da8e-proxy-tls\") pod \"machine-config-controller-84d6567774-tpntx\" (UID: \"36efccb8-7513-43d0-8952-d7ad9546da8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.088468 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.108343 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.127935 4699 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.131739 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/460579d9-ed16-49b7-a588-ef20ceb9bbf4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9tm8w\" (UID: \"460579d9-ed16-49b7-a588-ef20ceb9bbf4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.148871 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.188387 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.208581 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.227002 4699 request.go:700] Waited for 1.003087806s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.229669 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.248221 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.269492 4699 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.288505 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.309325 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.329642 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.348509 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.368220 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.389308 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.409177 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.429040 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.448263 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.468394 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: 
I0226 11:13:56.488599 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.507518 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.528053 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.547974 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.567743 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.588318 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.615632 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.627942 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.647928 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.668884 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.688309 4699 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.708922 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.728546 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.747678 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.768298 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.788018 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.808531 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.827585 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 26 11:13:56 crc kubenswrapper[4699]: I0226 11:13:56.868670 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.008068 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.027872 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.188420 4699 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.209988 4699 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.228501 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.246825 4699 request.go:700] Waited for 1.908555411s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.248666 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.268170 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.288216 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.308762 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.328297 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.347847 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 26 11:13:57 
crc kubenswrapper[4699]: I0226 11:13:57.368563 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.402960 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mxpf\" (UniqueName: \"kubernetes.io/projected/460579d9-ed16-49b7-a588-ef20ceb9bbf4-kube-api-access-2mxpf\") pod \"cluster-samples-operator-665b6dd947-9tm8w\" (UID: \"460579d9-ed16-49b7-a588-ef20ceb9bbf4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.442739 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tkln\" (UniqueName: \"kubernetes.io/projected/36efccb8-7513-43d0-8952-d7ad9546da8e-kube-api-access-2tkln\") pod \"machine-config-controller-84d6567774-tpntx\" (UID: \"36efccb8-7513-43d0-8952-d7ad9546da8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.543688 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgnh6\" (UniqueName: \"kubernetes.io/projected/34163385-0c26-4d54-a06a-11f9ef09901d-kube-api-access-qgnh6\") pod \"multus-admission-controller-857f4d67dd-k9bv4\" (UID: \"34163385-0c26-4d54-a06a-11f9ef09901d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.588237 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.607810 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.615684 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b070a40-85a6-42e6-a1bd-d834170a9c9c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f8rd9\" (UID: \"6b070a40-85a6-42e6-a1bd-d834170a9c9c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.624441 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.628645 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.631008 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpx8q\" (UniqueName: \"kubernetes.io/projected/09191eec-0be2-4c45-9249-6c8081d6108a-kube-api-access-lpx8q\") pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: \"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.631379 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd7cbed-d0bf-4d8c-933c-4d031170288a-config\") pod \"kube-apiserver-operator-766d6c64bb-vmxjr\" (UID: \"8cd7cbed-d0bf-4d8c-933c-4d031170288a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.631493 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7d5fe0-885a-44e4-bacf-19bceeea178f-config\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " 
pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.631674 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-tls\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.631929 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trrf7\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-kube-api-access-trrf7\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632000 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd7cbed-d0bf-4d8c-933c-4d031170288a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vmxjr\" (UID: \"8cd7cbed-d0bf-4d8c-933c-4d031170288a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632063 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-bound-sa-token\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632099 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/727302ed-b5c0-49b7-be17-7da9387c16c3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: \"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632171 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0c7d5fe0-885a-44e4-bacf-19bceeea178f-trusted-ca\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632246 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/727302ed-b5c0-49b7-be17-7da9387c16c3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: \"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632339 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/727302ed-b5c0-49b7-be17-7da9387c16c3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: \"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632384 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09191eec-0be2-4c45-9249-6c8081d6108a-metrics-tls\") pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: 
\"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632478 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-certificates\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632509 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7232eb23-31ae-4e72-ae27-c256dc4cac9a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632531 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09191eec-0be2-4c45-9249-6c8081d6108a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: \"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632639 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cd7cbed-d0bf-4d8c-933c-4d031170288a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vmxjr\" (UID: \"8cd7cbed-d0bf-4d8c-933c-4d031170288a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632684 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7chv\" (UniqueName: \"kubernetes.io/projected/679ffaa0-41b8-4638-8b4c-4c1f424812e4-kube-api-access-t7chv\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632777 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7d5fe0-885a-44e4-bacf-19bceeea178f-serving-cert\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632811 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-trusted-ca\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.632839 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679ffaa0-41b8-4638-8b4c-4c1f424812e4-config\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.633018 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb2g6\" (UniqueName: \"kubernetes.io/projected/727302ed-b5c0-49b7-be17-7da9387c16c3-kube-api-access-sb2g6\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: 
\"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.633058 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/679ffaa0-41b8-4638-8b4c-4c1f424812e4-machine-approver-tls\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.633172 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/679ffaa0-41b8-4638-8b4c-4c1f424812e4-auth-proxy-config\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.633220 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcndj\" (UniqueName: \"kubernetes.io/projected/0c7d5fe0-885a-44e4-bacf-19bceeea178f-kube-api-access-rcndj\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.633285 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.633321 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09191eec-0be2-4c45-9249-6c8081d6108a-trusted-ca\") pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: \"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.633410 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7232eb23-31ae-4e72-ae27-c256dc4cac9a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.633684 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" Feb 26 11:13:57 crc kubenswrapper[4699]: E0226 11:13:57.633910 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.133895492 +0000 UTC m=+183.944721926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.648937 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.668976 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.688622 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.708175 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.728843 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.734877 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:57 crc kubenswrapper[4699]: E0226 11:13:57.735048 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.235026818 +0000 UTC m=+184.045853252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.735336 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/727302ed-b5c0-49b7-be17-7da9387c16c3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: \"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.735492 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09191eec-0be2-4c45-9249-6c8081d6108a-metrics-tls\") pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: \"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.735578 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89f840f7-d21f-4028-b53d-ed0e2061ff15-config-volume\") pod \"dns-default-tnwpn\" (UID: \"89f840f7-d21f-4028-b53d-ed0e2061ff15\") " pod="openshift-dns/dns-default-tnwpn" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.735674 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44832f39-2c56-4669-b328-7e663f6cacdf-webhook-cert\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.735873 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-certificates\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.735912 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f8a28b8-c47b-4288-877f-8e90a3b581b5-config-volume\") pod \"collect-profiles-29535060-f97rd\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.735938 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckkpm\" (UniqueName: \"kubernetes.io/projected/6b9ab605-cf5d-43ea-9554-20032a52e23c-kube-api-access-ckkpm\") pod \"machine-config-server-rlx7c\" (UID: \"6b9ab605-cf5d-43ea-9554-20032a52e23c\") " pod="openshift-machine-config-operator/machine-config-server-rlx7c" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.735986 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-proxy-tls\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736013 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-plugins-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736036 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00fcad37-801c-4a2c-8599-dabd0f36db6d-cert\") pod \"ingress-canary-r2phw\" (UID: \"00fcad37-801c-4a2c-8599-dabd0f36db6d\") " pod="openshift-ingress-canary/ingress-canary-r2phw" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736061 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7232eb23-31ae-4e72-ae27-c256dc4cac9a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736085 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09191eec-0be2-4c45-9249-6c8081d6108a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: \"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736106 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cd7cbed-d0bf-4d8c-933c-4d031170288a-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-vmxjr\" (UID: \"8cd7cbed-d0bf-4d8c-933c-4d031170288a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736163 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7chv\" (UniqueName: \"kubernetes.io/projected/679ffaa0-41b8-4638-8b4c-4c1f424812e4-kube-api-access-t7chv\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736189 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7d5fe0-885a-44e4-bacf-19bceeea178f-serving-cert\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736240 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679ffaa0-41b8-4638-8b4c-4c1f424812e4-config\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736269 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cd5qf\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736293 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c57mh\" (UniqueName: \"kubernetes.io/projected/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-kube-api-access-c57mh\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736318 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-trusted-ca\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736362 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/23bae79f-03c7-4710-ac97-25da2c7988c4-signing-key\") pod \"service-ca-9c57cc56f-zqgj9\" (UID: \"23bae79f-03c7-4710-ac97-25da2c7988c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736396 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/23bae79f-03c7-4710-ac97-25da2c7988c4-signing-cabundle\") pod \"service-ca-9c57cc56f-zqgj9\" (UID: \"23bae79f-03c7-4710-ac97-25da2c7988c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736437 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1a9875bc-9f2e-4887-8dc6-a00cc789eb4a-srv-cert\") pod \"catalog-operator-68c6474976-xvgnb\" (UID: \"1a9875bc-9f2e-4887-8dc6-a00cc789eb4a\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736572 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b9ab605-cf5d-43ea-9554-20032a52e23c-node-bootstrap-token\") pod \"machine-config-server-rlx7c\" (UID: \"6b9ab605-cf5d-43ea-9554-20032a52e23c\") " pod="openshift-machine-config-operator/machine-config-server-rlx7c" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736669 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc38281-b1a4-4c40-a707-a106b651c107-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tg675\" (UID: \"fbc38281-b1a4-4c40-a707-a106b651c107\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736725 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5f6e45f7-93da-46b8-9021-d2500076c385-srv-cert\") pod \"olm-operator-6b444d44fb-czs8l\" (UID: \"5f6e45f7-93da-46b8-9021-d2500076c385\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.736835 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89f840f7-d21f-4028-b53d-ed0e2061ff15-metrics-tls\") pod \"dns-default-tnwpn\" (UID: \"89f840f7-d21f-4028-b53d-ed0e2061ff15\") " pod="openshift-dns/dns-default-tnwpn" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737156 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb2g6\" (UniqueName: 
\"kubernetes.io/projected/727302ed-b5c0-49b7-be17-7da9387c16c3-kube-api-access-sb2g6\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: \"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737260 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/679ffaa0-41b8-4638-8b4c-4c1f424812e4-machine-approver-tls\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737296 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qrl\" (UniqueName: \"kubernetes.io/projected/1a9875bc-9f2e-4887-8dc6-a00cc789eb4a-kube-api-access-l9qrl\") pod \"catalog-operator-68c6474976-xvgnb\" (UID: \"1a9875bc-9f2e-4887-8dc6-a00cc789eb4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737372 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/679ffaa0-41b8-4638-8b4c-4c1f424812e4-auth-proxy-config\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737397 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aac34b6-aad8-4b68-8180-f68af008611d-config\") pod \"service-ca-operator-777779d784-ctfcc\" (UID: \"0aac34b6-aad8-4b68-8180-f68af008611d\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737424 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcndj\" (UniqueName: \"kubernetes.io/projected/0c7d5fe0-885a-44e4-bacf-19bceeea178f-kube-api-access-rcndj\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737453 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737475 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09191eec-0be2-4c45-9249-6c8081d6108a-trusted-ca\") pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: \"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737522 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlm8k\" (UniqueName: \"kubernetes.io/projected/0aac34b6-aad8-4b68-8180-f68af008611d-kube-api-access-dlm8k\") pod \"service-ca-operator-777779d784-ctfcc\" (UID: \"0aac34b6-aad8-4b68-8180-f68af008611d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737542 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-registration-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737572 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44832f39-2c56-4669-b328-7e663f6cacdf-apiservice-cert\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737591 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhkh7\" (UniqueName: \"kubernetes.io/projected/00fcad37-801c-4a2c-8599-dabd0f36db6d-kube-api-access-jhkh7\") pod \"ingress-canary-r2phw\" (UID: \"00fcad37-801c-4a2c-8599-dabd0f36db6d\") " pod="openshift-ingress-canary/ingress-canary-r2phw" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737636 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnjhv\" (UniqueName: \"kubernetes.io/projected/5f8a28b8-c47b-4288-877f-8e90a3b581b5-kube-api-access-bnjhv\") pod \"collect-profiles-29535060-f97rd\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737660 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7232eb23-31ae-4e72-ae27-c256dc4cac9a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 
crc kubenswrapper[4699]: E0226 11:13:57.737811 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.2377984 +0000 UTC m=+184.048625024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737834 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc38281-b1a4-4c40-a707-a106b651c107-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tg675\" (UID: \"fbc38281-b1a4-4c40-a707-a106b651c107\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737947 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpx8q\" (UniqueName: \"kubernetes.io/projected/09191eec-0be2-4c45-9249-6c8081d6108a-kube-api-access-lpx8q\") pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: \"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.737974 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-csi-data-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: 
\"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738078 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhksj\" (UniqueName: \"kubernetes.io/projected/89f840f7-d21f-4028-b53d-ed0e2061ff15-kube-api-access-nhksj\") pod \"dns-default-tnwpn\" (UID: \"89f840f7-d21f-4028-b53d-ed0e2061ff15\") " pod="openshift-dns/dns-default-tnwpn" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738108 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd7cbed-d0bf-4d8c-933c-4d031170288a-config\") pod \"kube-apiserver-operator-766d6c64bb-vmxjr\" (UID: \"8cd7cbed-d0bf-4d8c-933c-4d031170288a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738137 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2p88\" (UniqueName: \"kubernetes.io/projected/23bae79f-03c7-4710-ac97-25da2c7988c4-kube-api-access-p2p88\") pod \"service-ca-9c57cc56f-zqgj9\" (UID: \"23bae79f-03c7-4710-ac97-25da2c7988c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738192 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7d5fe0-885a-44e4-bacf-19bceeea178f-config\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738209 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0aac34b6-aad8-4b68-8180-f68af008611d-serving-cert\") pod \"service-ca-operator-777779d784-ctfcc\" (UID: \"0aac34b6-aad8-4b68-8180-f68af008611d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738228 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6b9ab605-cf5d-43ea-9554-20032a52e23c-certs\") pod \"machine-config-server-rlx7c\" (UID: \"6b9ab605-cf5d-43ea-9554-20032a52e23c\") " pod="openshift-machine-config-operator/machine-config-server-rlx7c" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738265 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-tls\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738285 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwq64\" (UniqueName: \"kubernetes.io/projected/5cc10041-704b-4b00-8e4e-369103434b64-kube-api-access-bwq64\") pod \"marketplace-operator-79b997595-cd5qf\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738302 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-mountpoint-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738318 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/44832f39-2c56-4669-b328-7e663f6cacdf-tmpfs\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738351 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4ml4\" (UniqueName: \"kubernetes.io/projected/79a9064f-5fcf-42f7-af6f-71aeeb75560e-kube-api-access-l4ml4\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738369 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8mlc\" (UniqueName: \"kubernetes.io/projected/1d3e449f-d082-43cb-951d-53d82fde40ca-kube-api-access-t8mlc\") pod \"package-server-manager-789f6589d5-dk749\" (UID: \"1d3e449f-d082-43cb-951d-53d82fde40ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738402 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6jtx\" (UniqueName: \"kubernetes.io/projected/5f6e45f7-93da-46b8-9021-d2500076c385-kube-api-access-r6jtx\") pod \"olm-operator-6b444d44fb-czs8l\" (UID: \"5f6e45f7-93da-46b8-9021-d2500076c385\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738419 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trrf7\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-kube-api-access-trrf7\") pod 
\"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738435 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5f6e45f7-93da-46b8-9021-d2500076c385-profile-collector-cert\") pod \"olm-operator-6b444d44fb-czs8l\" (UID: \"5f6e45f7-93da-46b8-9021-d2500076c385\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738488 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd7cbed-d0bf-4d8c-933c-4d031170288a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vmxjr\" (UID: \"8cd7cbed-d0bf-4d8c-933c-4d031170288a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738521 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-images\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738544 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738571 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k275m\" (UniqueName: \"kubernetes.io/projected/fbc38281-b1a4-4c40-a707-a106b651c107-kube-api-access-k275m\") pod \"openshift-apiserver-operator-796bbdcf4f-tg675\" (UID: \"fbc38281-b1a4-4c40-a707-a106b651c107\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738632 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-bound-sa-token\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738657 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/727302ed-b5c0-49b7-be17-7da9387c16c3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: \"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738680 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f8a28b8-c47b-4288-877f-8e90a3b581b5-secret-volume\") pod \"collect-profiles-29535060-f97rd\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738716 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqcrg\" (UniqueName: 
\"kubernetes.io/projected/44832f39-2c56-4669-b328-7e663f6cacdf-kube-api-access-jqcrg\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738754 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1a9875bc-9f2e-4887-8dc6-a00cc789eb4a-profile-collector-cert\") pod \"catalog-operator-68c6474976-xvgnb\" (UID: \"1a9875bc-9f2e-4887-8dc6-a00cc789eb4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738778 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0c7d5fe0-885a-44e4-bacf-19bceeea178f-trusted-ca\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738850 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/727302ed-b5c0-49b7-be17-7da9387c16c3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: \"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738875 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-socket-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc 
kubenswrapper[4699]: I0226 11:13:57.738925 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cd5qf\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.738963 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3e449f-d082-43cb-951d-53d82fde40ca-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dk749\" (UID: \"1d3e449f-d082-43cb-951d-53d82fde40ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.742672 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7232eb23-31ae-4e72-ae27-c256dc4cac9a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.743249 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-certificates\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.748083 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.772786 4699 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.788767 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.813701 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.841401 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:57 crc kubenswrapper[4699]: E0226 11:13:57.841479 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.341462881 +0000 UTC m=+184.152289315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.841628 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/44832f39-2c56-4669-b328-7e663f6cacdf-tmpfs\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.841663 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6jtx\" (UniqueName: \"kubernetes.io/projected/5f6e45f7-93da-46b8-9021-d2500076c385-kube-api-access-r6jtx\") pod \"olm-operator-6b444d44fb-czs8l\" (UID: \"5f6e45f7-93da-46b8-9021-d2500076c385\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.841688 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4ml4\" (UniqueName: \"kubernetes.io/projected/79a9064f-5fcf-42f7-af6f-71aeeb75560e-kube-api-access-l4ml4\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.841714 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8mlc\" (UniqueName: \"kubernetes.io/projected/1d3e449f-d082-43cb-951d-53d82fde40ca-kube-api-access-t8mlc\") pod 
\"package-server-manager-789f6589d5-dk749\" (UID: \"1d3e449f-d082-43cb-951d-53d82fde40ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.841751 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5f6e45f7-93da-46b8-9021-d2500076c385-profile-collector-cert\") pod \"olm-operator-6b444d44fb-czs8l\" (UID: \"5f6e45f7-93da-46b8-9021-d2500076c385\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.841833 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-images\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.841857 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.841885 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k275m\" (UniqueName: \"kubernetes.io/projected/fbc38281-b1a4-4c40-a707-a106b651c107-kube-api-access-k275m\") pod \"openshift-apiserver-operator-796bbdcf4f-tg675\" (UID: \"fbc38281-b1a4-4c40-a707-a106b651c107\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.841927 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f8a28b8-c47b-4288-877f-8e90a3b581b5-secret-volume\") pod \"collect-profiles-29535060-f97rd\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.842257 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/44832f39-2c56-4669-b328-7e663f6cacdf-tmpfs\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.842842 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.843425 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-images\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.843605 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqcrg\" (UniqueName: \"kubernetes.io/projected/44832f39-2c56-4669-b328-7e663f6cacdf-kube-api-access-jqcrg\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.843645 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1a9875bc-9f2e-4887-8dc6-a00cc789eb4a-profile-collector-cert\") pod \"catalog-operator-68c6474976-xvgnb\" (UID: \"1a9875bc-9f2e-4887-8dc6-a00cc789eb4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.843690 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-socket-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.843727 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cd5qf\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.843781 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3e449f-d082-43cb-951d-53d82fde40ca-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dk749\" (UID: \"1d3e449f-d082-43cb-951d-53d82fde40ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.843863 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/89f840f7-d21f-4028-b53d-ed0e2061ff15-config-volume\") pod \"dns-default-tnwpn\" (UID: \"89f840f7-d21f-4028-b53d-ed0e2061ff15\") " pod="openshift-dns/dns-default-tnwpn" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.843892 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44832f39-2c56-4669-b328-7e663f6cacdf-webhook-cert\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.843939 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f8a28b8-c47b-4288-877f-8e90a3b581b5-config-volume\") pod \"collect-profiles-29535060-f97rd\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.843964 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckkpm\" (UniqueName: \"kubernetes.io/projected/6b9ab605-cf5d-43ea-9554-20032a52e23c-kube-api-access-ckkpm\") pod \"machine-config-server-rlx7c\" (UID: \"6b9ab605-cf5d-43ea-9554-20032a52e23c\") " pod="openshift-machine-config-operator/machine-config-server-rlx7c" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.843997 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-proxy-tls\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.844037 4699 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-plugins-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.844056 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00fcad37-801c-4a2c-8599-dabd0f36db6d-cert\") pod \"ingress-canary-r2phw\" (UID: \"00fcad37-801c-4a2c-8599-dabd0f36db6d\") " pod="openshift-ingress-canary/ingress-canary-r2phw" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.844826 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89f840f7-d21f-4028-b53d-ed0e2061ff15-config-volume\") pod \"dns-default-tnwpn\" (UID: \"89f840f7-d21f-4028-b53d-ed0e2061ff15\") " pod="openshift-dns/dns-default-tnwpn" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845454 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cd5qf\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845482 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c57mh\" (UniqueName: \"kubernetes.io/projected/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-kube-api-access-c57mh\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845505 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/23bae79f-03c7-4710-ac97-25da2c7988c4-signing-key\") pod \"service-ca-9c57cc56f-zqgj9\" (UID: \"23bae79f-03c7-4710-ac97-25da2c7988c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845537 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/23bae79f-03c7-4710-ac97-25da2c7988c4-signing-cabundle\") pod \"service-ca-9c57cc56f-zqgj9\" (UID: \"23bae79f-03c7-4710-ac97-25da2c7988c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845558 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1a9875bc-9f2e-4887-8dc6-a00cc789eb4a-srv-cert\") pod \"catalog-operator-68c6474976-xvgnb\" (UID: \"1a9875bc-9f2e-4887-8dc6-a00cc789eb4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845580 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b9ab605-cf5d-43ea-9554-20032a52e23c-node-bootstrap-token\") pod \"machine-config-server-rlx7c\" (UID: \"6b9ab605-cf5d-43ea-9554-20032a52e23c\") " pod="openshift-machine-config-operator/machine-config-server-rlx7c" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845600 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc38281-b1a4-4c40-a707-a106b651c107-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tg675\" (UID: \"fbc38281-b1a4-4c40-a707-a106b651c107\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" Feb 26 11:13:57 crc kubenswrapper[4699]: 
I0226 11:13:57.845627 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5f6e45f7-93da-46b8-9021-d2500076c385-srv-cert\") pod \"olm-operator-6b444d44fb-czs8l\" (UID: \"5f6e45f7-93da-46b8-9021-d2500076c385\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845653 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89f840f7-d21f-4028-b53d-ed0e2061ff15-metrics-tls\") pod \"dns-default-tnwpn\" (UID: \"89f840f7-d21f-4028-b53d-ed0e2061ff15\") " pod="openshift-dns/dns-default-tnwpn" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845724 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9qrl\" (UniqueName: \"kubernetes.io/projected/1a9875bc-9f2e-4887-8dc6-a00cc789eb4a-kube-api-access-l9qrl\") pod \"catalog-operator-68c6474976-xvgnb\" (UID: \"1a9875bc-9f2e-4887-8dc6-a00cc789eb4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845861 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aac34b6-aad8-4b68-8180-f68af008611d-config\") pod \"service-ca-operator-777779d784-ctfcc\" (UID: \"0aac34b6-aad8-4b68-8180-f68af008611d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845912 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845963 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlm8k\" (UniqueName: \"kubernetes.io/projected/0aac34b6-aad8-4b68-8180-f68af008611d-kube-api-access-dlm8k\") pod \"service-ca-operator-777779d784-ctfcc\" (UID: \"0aac34b6-aad8-4b68-8180-f68af008611d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.845987 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-registration-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.846008 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44832f39-2c56-4669-b328-7e663f6cacdf-apiservice-cert\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.846029 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhkh7\" (UniqueName: \"kubernetes.io/projected/00fcad37-801c-4a2c-8599-dabd0f36db6d-kube-api-access-jhkh7\") pod \"ingress-canary-r2phw\" (UID: \"00fcad37-801c-4a2c-8599-dabd0f36db6d\") " pod="openshift-ingress-canary/ingress-canary-r2phw" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.846054 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnjhv\" (UniqueName: \"kubernetes.io/projected/5f8a28b8-c47b-4288-877f-8e90a3b581b5-kube-api-access-bnjhv\") pod 
\"collect-profiles-29535060-f97rd\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.846088 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc38281-b1a4-4c40-a707-a106b651c107-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tg675\" (UID: \"fbc38281-b1a4-4c40-a707-a106b651c107\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.846151 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-csi-data-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.846215 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhksj\" (UniqueName: \"kubernetes.io/projected/89f840f7-d21f-4028-b53d-ed0e2061ff15-kube-api-access-nhksj\") pod \"dns-default-tnwpn\" (UID: \"89f840f7-d21f-4028-b53d-ed0e2061ff15\") " pod="openshift-dns/dns-default-tnwpn" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.846260 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2p88\" (UniqueName: \"kubernetes.io/projected/23bae79f-03c7-4710-ac97-25da2c7988c4-kube-api-access-p2p88\") pod \"service-ca-9c57cc56f-zqgj9\" (UID: \"23bae79f-03c7-4710-ac97-25da2c7988c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.846294 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0aac34b6-aad8-4b68-8180-f68af008611d-serving-cert\") pod \"service-ca-operator-777779d784-ctfcc\" (UID: \"0aac34b6-aad8-4b68-8180-f68af008611d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.846324 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6b9ab605-cf5d-43ea-9554-20032a52e23c-certs\") pod \"machine-config-server-rlx7c\" (UID: \"6b9ab605-cf5d-43ea-9554-20032a52e23c\") " pod="openshift-machine-config-operator/machine-config-server-rlx7c" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.846372 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwq64\" (UniqueName: \"kubernetes.io/projected/5cc10041-704b-4b00-8e4e-369103434b64-kube-api-access-bwq64\") pod \"marketplace-operator-79b997595-cd5qf\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.846396 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-mountpoint-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.847159 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cd5qf\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.847442 4699 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-socket-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.850404 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc38281-b1a4-4c40-a707-a106b651c107-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tg675\" (UID: \"fbc38281-b1a4-4c40-a707-a106b651c107\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.850547 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-csi-data-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.851466 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aac34b6-aad8-4b68-8180-f68af008611d-config\") pod \"service-ca-operator-777779d784-ctfcc\" (UID: \"0aac34b6-aad8-4b68-8180-f68af008611d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" Feb 26 11:13:57 crc kubenswrapper[4699]: E0226 11:13:57.851892 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.351874188 +0000 UTC m=+184.162700622 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.852613 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.854768 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1a9875bc-9f2e-4887-8dc6-a00cc789eb4a-profile-collector-cert\") pod \"catalog-operator-68c6474976-xvgnb\" (UID: \"1a9875bc-9f2e-4887-8dc6-a00cc789eb4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.855183 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f8a28b8-c47b-4288-877f-8e90a3b581b5-secret-volume\") pod \"collect-profiles-29535060-f97rd\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.855245 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-mountpoint-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.855345 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-registration-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.855814 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6b9ab605-cf5d-43ea-9554-20032a52e23c-certs\") pod \"machine-config-server-rlx7c\" (UID: \"6b9ab605-cf5d-43ea-9554-20032a52e23c\") " pod="openshift-machine-config-operator/machine-config-server-rlx7c" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.856375 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc38281-b1a4-4c40-a707-a106b651c107-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tg675\" (UID: \"fbc38281-b1a4-4c40-a707-a106b651c107\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.857421 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/23bae79f-03c7-4710-ac97-25da2c7988c4-signing-cabundle\") pod \"service-ca-9c57cc56f-zqgj9\" (UID: \"23bae79f-03c7-4710-ac97-25da2c7988c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.857685 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5f6e45f7-93da-46b8-9021-d2500076c385-profile-collector-cert\") pod \"olm-operator-6b444d44fb-czs8l\" (UID: \"5f6e45f7-93da-46b8-9021-d2500076c385\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.858909 4699 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89f840f7-d21f-4028-b53d-ed0e2061ff15-metrics-tls\") pod \"dns-default-tnwpn\" (UID: \"89f840f7-d21f-4028-b53d-ed0e2061ff15\") " pod="openshift-dns/dns-default-tnwpn" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.858988 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3e449f-d082-43cb-951d-53d82fde40ca-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dk749\" (UID: \"1d3e449f-d082-43cb-951d-53d82fde40ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.859185 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/79a9064f-5fcf-42f7-af6f-71aeeb75560e-plugins-dir\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.859514 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44832f39-2c56-4669-b328-7e663f6cacdf-webhook-cert\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.859596 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44832f39-2c56-4669-b328-7e663f6cacdf-apiservice-cert\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.859665 4699 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cd5qf\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.859824 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f8a28b8-c47b-4288-877f-8e90a3b581b5-config-volume\") pod \"collect-profiles-29535060-f97rd\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.860159 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5f6e45f7-93da-46b8-9021-d2500076c385-srv-cert\") pod \"olm-operator-6b444d44fb-czs8l\" (UID: \"5f6e45f7-93da-46b8-9021-d2500076c385\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.860353 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00fcad37-801c-4a2c-8599-dabd0f36db6d-cert\") pod \"ingress-canary-r2phw\" (UID: \"00fcad37-801c-4a2c-8599-dabd0f36db6d\") " pod="openshift-ingress-canary/ingress-canary-r2phw" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.860857 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/23bae79f-03c7-4710-ac97-25da2c7988c4-signing-key\") pod \"service-ca-9c57cc56f-zqgj9\" (UID: \"23bae79f-03c7-4710-ac97-25da2c7988c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.861555 4699 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b9ab605-cf5d-43ea-9554-20032a52e23c-node-bootstrap-token\") pod \"machine-config-server-rlx7c\" (UID: \"6b9ab605-cf5d-43ea-9554-20032a52e23c\") " pod="openshift-machine-config-operator/machine-config-server-rlx7c" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.861754 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aac34b6-aad8-4b68-8180-f68af008611d-serving-cert\") pod \"service-ca-operator-777779d784-ctfcc\" (UID: \"0aac34b6-aad8-4b68-8180-f68af008611d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.862178 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1a9875bc-9f2e-4887-8dc6-a00cc789eb4a-srv-cert\") pod \"catalog-operator-68c6474976-xvgnb\" (UID: \"1a9875bc-9f2e-4887-8dc6-a00cc789eb4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.862689 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-proxy-tls\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.878713 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.879701 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx"] Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.887887 4699 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.891223 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89928475-c3fb-415f-a244-6292dc8adc33-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gpjvh\" (UID: \"89928475-c3fb-415f-a244-6292dc8adc33\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" Feb 26 11:13:57 crc kubenswrapper[4699]: W0226 11:13:57.892256 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36efccb8_7513_43d0_8952_d7ad9546da8e.slice/crio-837f0748289c32dab50176203bf6899f152b0041109c72b2c3ad10609d715f51 WatchSource:0}: Error finding container 837f0748289c32dab50176203bf6899f152b0041109c72b2c3ad10609d715f51: Status 404 returned error can't find the container with id 837f0748289c32dab50176203bf6899f152b0041109c72b2c3ad10609d715f51 Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.907363 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w"] Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.908446 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.928172 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.947059 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:57 crc kubenswrapper[4699]: E0226 11:13:57.947904 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.447885603 +0000 UTC m=+184.258712037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.948264 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.966613 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.978835 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.991736 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ttxr\" (UniqueName: \"kubernetes.io/projected/61a1581f-5367-4535-99bc-3f28547ab766-kube-api-access-2ttxr\") pod \"dns-operator-744455d44c-xbpcs\" (UID: \"61a1581f-5367-4535-99bc-3f28547ab766\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" Feb 26 11:13:57 crc kubenswrapper[4699]: I0226 11:13:57.992186 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.003875 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvvdh\" (UniqueName: \"kubernetes.io/projected/e6bdcf19-db76-497c-a2fe-a6de38fae724-kube-api-access-wvvdh\") pod \"console-f9d7485db-hnsh7\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.007286 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmzs7\" (UniqueName: \"kubernetes.io/projected/72b1bc55-f48b-4d90-ab02-3a80438096b6-kube-api-access-rmzs7\") pod \"downloads-7954f5f757-tcnxt\" (UID: \"72b1bc55-f48b-4d90-ab02-3a80438096b6\") " pod="openshift-console/downloads-7954f5f757-tcnxt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.009794 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.028620 4699 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.035188 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcvd8\" (UniqueName: \"kubernetes.io/projected/9c1f6032-b723-4cb3-a93b-73d053eaf822-kube-api-access-vcvd8\") pod \"openshift-controller-manager-operator-756b6f6bc6-2zshh\" (UID: \"9c1f6032-b723-4cb3-a93b-73d053eaf822\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.042967 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7knj\" (UniqueName: \"kubernetes.io/projected/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-kube-api-access-q7knj\") pod \"oauth-openshift-558db77b4-22qbz\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.049071 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.049101 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.049866 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.549848513 +0000 UTC m=+184.360675127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.053468 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjjgq\" (UniqueName: \"kubernetes.io/projected/796e9631-3388-48b1-8675-3fbc4b6e435d-kube-api-access-vjjgq\") pod \"controller-manager-879f6c89f-gsl8w\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.070618 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.085373 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgwjg\" (UniqueName: \"kubernetes.io/projected/bad776f4-e24b-41f1-88d8-2b1fe6258783-kube-api-access-tgwjg\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9wj4\" (UID: \"bad776f4-e24b-41f1-88d8-2b1fe6258783\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.085373 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qjv\" (UniqueName: \"kubernetes.io/projected/5d015dd8-56c9-4f61-b133-4951cda91ca5-kube-api-access-j4qjv\") pod \"machine-api-operator-5694c8668f-pw64v\" (UID: \"5d015dd8-56c9-4f61-b133-4951cda91ca5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 
11:13:58.088591 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.092147 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f94s2\" (UniqueName: \"kubernetes.io/projected/64bd7009-a06a-43e1-b265-3ea78b5801b9-kube-api-access-f94s2\") pod \"authentication-operator-69f744f599-qsj62\" (UID: \"64bd7009-a06a-43e1-b265-3ea78b5801b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.117905 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.128777 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/727302ed-b5c0-49b7-be17-7da9387c16c3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: \"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.129546 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.130054 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-trusted-ca\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.140482 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09191eec-0be2-4c45-9249-6c8081d6108a-metrics-tls\") 
pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: \"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.149242 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.150712 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.150908 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.650870046 +0000 UTC m=+184.461696480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.151693 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.151925 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-k9bv4"] Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.152205 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.652182045 +0000 UTC m=+184.463008649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.158422 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" event={"ID":"36efccb8-7513-43d0-8952-d7ad9546da8e","Type":"ContainerStarted","Data":"837f0748289c32dab50176203bf6899f152b0041109c72b2c3ad10609d715f51"} Feb 26 11:13:58 crc kubenswrapper[4699]: W0226 11:13:58.159744 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34163385_0c26_4d54_a06a_11f9ef09901d.slice/crio-01184b470301f94db3f051ddb11b4c9962804b7dc43b0df93c9a87ed950eca02 WatchSource:0}: Error finding container 01184b470301f94db3f051ddb11b4c9962804b7dc43b0df93c9a87ed950eca02: Status 404 returned error can't find the container with id 01184b470301f94db3f051ddb11b4c9962804b7dc43b0df93c9a87ed950eca02 Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.185332 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09191eec-0be2-4c45-9249-6c8081d6108a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: \"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.227669 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 
11:13:58.237674 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679ffaa0-41b8-4638-8b4c-4c1f424812e4-config\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.252826 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.253009 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.75298238 +0000 UTC m=+184.563808814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.253809 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.254087 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.754078382 +0000 UTC m=+184.564904816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.263245 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb2g6\" (UniqueName: \"kubernetes.io/projected/727302ed-b5c0-49b7-be17-7da9387c16c3-kube-api-access-sb2g6\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: \"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.269972 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.280699 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/679ffaa0-41b8-4638-8b4c-4c1f424812e4-machine-approver-tls\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.287926 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.298081 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/679ffaa0-41b8-4638-8b4c-4c1f424812e4-auth-proxy-config\") pod 
\"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.334830 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.338861 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09191eec-0be2-4c45-9249-6c8081d6108a-trusted-ca\") pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: \"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.348954 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.354515 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.354661 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.854628661 +0000 UTC m=+184.665455095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.354938 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.355153 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.355505 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.855490516 +0000 UTC m=+184.666316950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.366289 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7d5fe0-885a-44e4-bacf-19bceeea178f-serving-cert\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.409053 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.420072 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cd7cbed-d0bf-4d8c-933c-4d031170288a-config\") pod \"kube-apiserver-operator-766d6c64bb-vmxjr\" (UID: \"8cd7cbed-d0bf-4d8c-933c-4d031170288a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.445921 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/727302ed-b5c0-49b7-be17-7da9387c16c3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: \"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.448211 4699 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.452576 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/727302ed-b5c0-49b7-be17-7da9387c16c3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vvgmh\" (UID: \"727302ed-b5c0-49b7-be17-7da9387c16c3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.455779 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.456565 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:58.9565475 +0000 UTC m=+184.767373934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.475455 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.481598 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0c7d5fe0-885a-44e4-bacf-19bceeea178f-trusted-ca\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.507620 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.510537 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7d5fe0-885a-44e4-bacf-19bceeea178f-config\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.522718 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7232eb23-31ae-4e72-ae27-c256dc4cac9a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.528163 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.548228 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.554243 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cd7cbed-d0bf-4d8c-933c-4d031170288a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vmxjr\" (UID: \"8cd7cbed-d0bf-4d8c-933c-4d031170288a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.558055 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.558456 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.058441518 +0000 UTC m=+184.869267952 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.562995 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trrf7\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-kube-api-access-trrf7\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.563302 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-bound-sa-token\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.568174 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-tls\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.569229 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.573664 4699 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.590651 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.600677 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwpnc\" (UniqueName: \"kubernetes.io/projected/a550b2ea-3ce7-4df3-bbf5-f1025afca8c1-kube-api-access-xwpnc\") pod \"apiserver-76f77b778f-f8s5j\" (UID: \"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1\") " pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.608469 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.617973 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h5hb\" (UniqueName: \"kubernetes.io/projected/afa5e1ce-a457-4771-ab06-2654a7801704-kube-api-access-7h5hb\") pod \"apiserver-7bbb656c7d-tm98c\" (UID: \"afa5e1ce-a457-4771-ab06-2654a7801704\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.628222 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.651014 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.655455 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66m62\" (UniqueName: \"kubernetes.io/projected/744aa737-e6c7-4d6b-ba7d-a9479043ad29-kube-api-access-66m62\") pod \"route-controller-manager-6576b87f9c-fq7g8\" 
(UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.659807 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.660195 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.160143441 +0000 UTC m=+184.970969875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.660672 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.661425 4699 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.161416269 +0000 UTC m=+184.972242703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.669539 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4jdw\" (UniqueName: \"kubernetes.io/projected/03fd3407-9529-4638-89d6-cfc6b703e510-kube-api-access-j4jdw\") pod \"openshift-config-operator-7777fb866f-vzj5b\" (UID: \"03fd3407-9529-4638-89d6-cfc6b703e510\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.705531 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6jtx\" (UniqueName: \"kubernetes.io/projected/5f6e45f7-93da-46b8-9021-d2500076c385-kube-api-access-r6jtx\") pod \"olm-operator-6b444d44fb-czs8l\" (UID: \"5f6e45f7-93da-46b8-9021-d2500076c385\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.723777 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8mlc\" (UniqueName: \"kubernetes.io/projected/1d3e449f-d082-43cb-951d-53d82fde40ca-kube-api-access-t8mlc\") pod \"package-server-manager-789f6589d5-dk749\" (UID: \"1d3e449f-d082-43cb-951d-53d82fde40ca\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.740471 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9"] Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.742911 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4ml4\" (UniqueName: \"kubernetes.io/projected/79a9064f-5fcf-42f7-af6f-71aeeb75560e-kube-api-access-l4ml4\") pod \"csi-hostpathplugin-qzphl\" (UID: \"79a9064f-5fcf-42f7-af6f-71aeeb75560e\") " pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.761886 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.762096 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.262064 +0000 UTC m=+185.072890434 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.762523 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.762895 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.262882735 +0000 UTC m=+185.073709169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.762927 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k275m\" (UniqueName: \"kubernetes.io/projected/fbc38281-b1a4-4c40-a707-a106b651c107-kube-api-access-k275m\") pod \"openshift-apiserver-operator-796bbdcf4f-tg675\" (UID: \"fbc38281-b1a4-4c40-a707-a106b651c107\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.782309 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqcrg\" (UniqueName: \"kubernetes.io/projected/44832f39-2c56-4669-b328-7e663f6cacdf-kube-api-access-jqcrg\") pod \"packageserver-d55dfcdfc-w7nqx\" (UID: \"44832f39-2c56-4669-b328-7e663f6cacdf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.802145 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2p88\" (UniqueName: \"kubernetes.io/projected/23bae79f-03c7-4710-ac97-25da2c7988c4-kube-api-access-p2p88\") pod \"service-ca-9c57cc56f-zqgj9\" (UID: \"23bae79f-03c7-4710-ac97-25da2c7988c4\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.823840 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlm8k\" (UniqueName: \"kubernetes.io/projected/0aac34b6-aad8-4b68-8180-f68af008611d-kube-api-access-dlm8k\") pod 
\"service-ca-operator-777779d784-ctfcc\" (UID: \"0aac34b6-aad8-4b68-8180-f68af008611d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" Feb 26 11:13:58 crc kubenswrapper[4699]: W0226 11:13:58.826796 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b070a40_85a6_42e6_a1bd_d834170a9c9c.slice/crio-c2296dd5d7be174ff8b90aed65a4512f5fe3cc9b83f7f09a1988401db2385307 WatchSource:0}: Error finding container c2296dd5d7be174ff8b90aed65a4512f5fe3cc9b83f7f09a1988401db2385307: Status 404 returned error can't find the container with id c2296dd5d7be174ff8b90aed65a4512f5fe3cc9b83f7f09a1988401db2385307 Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.843811 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhksj\" (UniqueName: \"kubernetes.io/projected/89f840f7-d21f-4028-b53d-ed0e2061ff15-kube-api-access-nhksj\") pod \"dns-default-tnwpn\" (UID: \"89f840f7-d21f-4028-b53d-ed0e2061ff15\") " pod="openshift-dns/dns-default-tnwpn" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.858558 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.859058 4699 projected.go:288] Couldn't get configMap openshift-ingress/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.859077 4699 projected.go:194] Error preparing data for projected volume kube-api-access-wks59 for pod openshift-ingress/router-default-5444994796-xm88w: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.859146 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4a97e310-1811-48a9-a31a-eb9a0321d280-kube-api-access-wks59 podName:4a97e310-1811-48a9-a31a-eb9a0321d280 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.359109166 +0000 UTC m=+185.169935600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wks59" (UniqueName: "kubernetes.io/projected/4a97e310-1811-48a9-a31a-eb9a0321d280-kube-api-access-wks59") pod "router-default-5444994796-xm88w" (UID: "4a97e310-1811-48a9-a31a-eb9a0321d280") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.863720 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.864287 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.864411 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.364391872 +0000 UTC m=+185.175218306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.864754 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.864914 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnjhv\" (UniqueName: 
\"kubernetes.io/projected/5f8a28b8-c47b-4288-877f-8e90a3b581b5-kube-api-access-bnjhv\") pod \"collect-profiles-29535060-f97rd\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.865025 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.36501683 +0000 UTC m=+185.175843264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.865078 4699 request.go:700] Waited for 1.014399349s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.883399 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9qrl\" (UniqueName: \"kubernetes.io/projected/1a9875bc-9f2e-4887-8dc6-a00cc789eb4a-kube-api-access-l9qrl\") pod \"catalog-operator-68c6474976-xvgnb\" (UID: \"1a9875bc-9f2e-4887-8dc6-a00cc789eb4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.889194 4699 projected.go:288] Couldn't get configMap 
openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.889251 4699 projected.go:194] Error preparing data for projected volume kube-api-access-6wrjv for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.889332 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0ecd5cc-b456-4d69-897c-5fd543842440-kube-api-access-6wrjv podName:e0ecd5cc-b456-4d69-897c-5fd543842440 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.389309087 +0000 UTC m=+185.200135591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6wrjv" (UniqueName: "kubernetes.io/projected/e0ecd5cc-b456-4d69-897c-5fd543842440-kube-api-access-6wrjv") pod "kube-storage-version-migrator-operator-b67b599dd-gngtb" (UID: "e0ecd5cc-b456-4d69-897c-5fd543842440") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.895154 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.905553 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.906972 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckkpm\" (UniqueName: \"kubernetes.io/projected/6b9ab605-cf5d-43ea-9554-20032a52e23c-kube-api-access-ckkpm\") pod \"machine-config-server-rlx7c\" (UID: \"6b9ab605-cf5d-43ea-9554-20032a52e23c\") " pod="openshift-machine-config-operator/machine-config-server-rlx7c" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.909369 4699 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.909400 4699 projected.go:194] Error preparing data for projected volume kube-api-access-7frcg for pod openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.909491 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af5429d7-39d0-4b17-8219-21c8491384ae-kube-api-access-7frcg podName:af5429d7-39d0-4b17-8219-21c8491384ae nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.409463582 +0000 UTC m=+185.220290086 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7frcg" (UniqueName: "kubernetes.io/projected/af5429d7-39d0-4b17-8219-21c8491384ae-kube-api-access-7frcg") pod "migrator-59844c95c7-k6wtb" (UID: "af5429d7-39d0-4b17-8219-21c8491384ae") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.917540 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.924350 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwq64\" (UniqueName: \"kubernetes.io/projected/5cc10041-704b-4b00-8e4e-369103434b64-kube-api-access-bwq64\") pod \"marketplace-operator-79b997595-cd5qf\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.927776 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.930146 4699 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.930169 4699 projected.go:194] Error preparing data for projected volume kube-api-access-d8m6h for pod openshift-etcd-operator/etcd-operator-b45778765-j6vfb: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.930226 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fab52d01-f907-44cb-8d5f-162116d75fc9-kube-api-access-d8m6h podName:fab52d01-f907-44cb-8d5f-162116d75fc9 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.430207245 +0000 UTC m=+185.241033669 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d8m6h" (UniqueName: "kubernetes.io/projected/fab52d01-f907-44cb-8d5f-162116d75fc9-kube-api-access-d8m6h") pod "etcd-operator-b45778765-j6vfb" (UID: "fab52d01-f907-44cb-8d5f-162116d75fc9") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.938297 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.948512 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhkh7\" (UniqueName: \"kubernetes.io/projected/00fcad37-801c-4a2c-8599-dabd0f36db6d-kube-api-access-jhkh7\") pod \"ingress-canary-r2phw\" (UID: \"00fcad37-801c-4a2c-8599-dabd0f36db6d\") " pod="openshift-ingress-canary/ingress-canary-r2phw" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.952718 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tnwpn" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.966174 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.966468 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.466443625 +0000 UTC m=+185.277270059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.966989 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:58 crc kubenswrapper[4699]: E0226 11:13:58.967466 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.467454005 +0000 UTC m=+185.278280439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.968599 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.971508 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qzphl" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.973254 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c57mh\" (UniqueName: \"kubernetes.io/projected/b4f243e8-e08c-420e-a78b-02e6a14bf5fe-kube-api-access-c57mh\") pod \"machine-config-operator-74547568cd-pxsr8\" (UID: \"b4f243e8-e08c-420e-a78b-02e6a14bf5fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.981154 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rlx7c" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.987531 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 11:13:58 crc kubenswrapper[4699]: I0226 11:13:58.988657 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-r2phw" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.007641 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.028406 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.048023 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.054084 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tcnxt" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.067898 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.068449 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.068615 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.568600201 +0000 UTC m=+185.379426625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.070882 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.086008 4699 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" secret="" err="failed to sync secret cache: timed out waiting for the condition" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.086075 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.101978 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.113246 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.116593 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.126052 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zqgj9"] Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.126581 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.126879 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.146529 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.146606 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.155763 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.160840 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l"] Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.166957 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.171971 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.172153 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.172442 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.672426966 +0000 UTC m=+185.483253400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.173718 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.175153 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" event={"ID":"6b070a40-85a6-42e6-a1bd-d834170a9c9c","Type":"ContainerStarted","Data":"c2296dd5d7be174ff8b90aed65a4512f5fe3cc9b83f7f09a1988401db2385307"} Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.175414 4699 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" secret="" err="failed to sync secret cache: timed out waiting for the condition" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.175452 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.176038 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675"] Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.177315 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" event={"ID":"23bae79f-03c7-4710-ac97-25da2c7988c4","Type":"ContainerStarted","Data":"3471c9e71d3cd8d924d552747899f6cd1e83bdb74118e8458c7a8e924f4465a9"} Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.178667 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" event={"ID":"34163385-0c26-4d54-a06a-11f9ef09901d","Type":"ContainerStarted","Data":"01184b470301f94db3f051ddb11b4c9962804b7dc43b0df93c9a87ed950eca02"} Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.183943 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.187179 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.199438 4699 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.208157 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.213717 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd"] Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.221173 4699 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.221216 4699 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.221287 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8cd7cbed-d0bf-4d8c-933c-4d031170288a-kube-api-access podName:8cd7cbed-d0bf-4d8c-933c-4d031170288a nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.721264078 +0000 UTC m=+185.532090522 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/8cd7cbed-d0bf-4d8c-933c-4d031170288a-kube-api-access") pod "kube-apiserver-operator-766d6c64bb-vmxjr" (UID: "8cd7cbed-d0bf-4d8c-933c-4d031170288a") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.237034 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 11:13:59 crc kubenswrapper[4699]: W0226 11:13:59.246773 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f6e45f7_93da_46b8_9021_d2500076c385.slice/crio-518234f9f0a4807650213e3d0e0a0e32a30602572f66d2aa83784c4337aef135 WatchSource:0}: Error finding container 518234f9f0a4807650213e3d0e0a0e32a30602572f66d2aa83784c4337aef135: Status 404 returned error can't find the container with id 518234f9f0a4807650213e3d0e0a0e32a30602572f66d2aa83784c4337aef135 Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.251316 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.253193 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.268202 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.273463 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.273873 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.773852821 +0000 UTC m=+185.584679255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.287331 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.301355 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6956c039-cf77-429b-8f7f-f93ba195d321-metrics-certs\") pod \"network-metrics-daemon-v5ctv\" (UID: \"6956c039-cf77-429b-8f7f-f93ba195d321\") " pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.308996 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.310569 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.327106 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.337185 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx"] Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.368323 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.376990 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.377060 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.377239 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wks59\" (UniqueName: \"kubernetes.io/projected/4a97e310-1811-48a9-a31a-eb9a0321d280-kube-api-access-wks59\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.377448 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.877424319 +0000 UTC m=+185.688250943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.382256 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wks59\" (UniqueName: \"kubernetes.io/projected/4a97e310-1811-48a9-a31a-eb9a0321d280-kube-api-access-wks59\") pod \"router-default-5444994796-xm88w\" (UID: \"4a97e310-1811-48a9-a31a-eb9a0321d280\") " pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.387266 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749"] Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 
11:13:59.387676 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.390431 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.408760 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.415736 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.429153 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.430481 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:13:59 crc kubenswrapper[4699]: W0226 11:13:59.447797 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44832f39_2c56_4669_b328_7e663f6cacdf.slice/crio-dcfcf531fe129586ab3d0914bc42572e1dd887de749c4a861cb770ab36f5adda WatchSource:0}: Error finding container dcfcf531fe129586ab3d0914bc42572e1dd887de749c4a861cb770ab36f5adda: Status 404 returned error can't find the container with id dcfcf531fe129586ab3d0914bc42572e1dd887de749c4a861cb770ab36f5adda Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.450423 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.451513 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.468208 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc"] Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.475413 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.477801 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.478043 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7frcg\" (UniqueName: \"kubernetes.io/projected/af5429d7-39d0-4b17-8219-21c8491384ae-kube-api-access-7frcg\") pod \"migrator-59844c95c7-k6wtb\" (UID: \"af5429d7-39d0-4b17-8219-21c8491384ae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.478096 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8m6h\" (UniqueName: \"kubernetes.io/projected/fab52d01-f907-44cb-8d5f-162116d75fc9-kube-api-access-d8m6h\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.478171 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wrjv\" (UniqueName: \"kubernetes.io/projected/e0ecd5cc-b456-4d69-897c-5fd543842440-kube-api-access-6wrjv\") pod \"kube-storage-version-migrator-operator-b67b599dd-gngtb\" (UID: \"e0ecd5cc-b456-4d69-897c-5fd543842440\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.478661 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.978633487 +0000 UTC m=+185.789459961 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.484535 4699 projected.go:194] Error preparing data for projected volume kube-api-access-t7chv for pod openshift-cluster-machine-approver/machine-approver-56656f9798-p742p: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.484643 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/679ffaa0-41b8-4638-8b4c-4c1f424812e4-kube-api-access-t7chv podName:679ffaa0-41b8-4638-8b4c-4c1f424812e4 nodeName:}" failed. No retries permitted until 2026-02-26 11:13:59.984619514 +0000 UTC m=+185.795445948 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t7chv" (UniqueName: "kubernetes.io/projected/679ffaa0-41b8-4638-8b4c-4c1f424812e4-kube-api-access-t7chv") pod "machine-approver-56656f9798-p742p" (UID: "679ffaa0-41b8-4638-8b4c-4c1f424812e4") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.489010 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.505322 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8m6h\" (UniqueName: \"kubernetes.io/projected/fab52d01-f907-44cb-8d5f-162116d75fc9-kube-api-access-d8m6h\") pod \"etcd-operator-b45778765-j6vfb\" (UID: \"fab52d01-f907-44cb-8d5f-162116d75fc9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.509879 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.513259 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7frcg\" (UniqueName: \"kubernetes.io/projected/af5429d7-39d0-4b17-8219-21c8491384ae-kube-api-access-7frcg\") pod \"migrator-59844c95c7-k6wtb\" (UID: \"af5429d7-39d0-4b17-8219-21c8491384ae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.519138 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpx8q\" (UniqueName: \"kubernetes.io/projected/09191eec-0be2-4c45-9249-6c8081d6108a-kube-api-access-lpx8q\") pod \"ingress-operator-5b745b69d9-wcnnr\" (UID: \"09191eec-0be2-4c45-9249-6c8081d6108a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 
11:13:59.521357 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wrjv\" (UniqueName: \"kubernetes.io/projected/e0ecd5cc-b456-4d69-897c-5fd543842440-kube-api-access-6wrjv\") pod \"kube-storage-version-migrator-operator-b67b599dd-gngtb\" (UID: \"e0ecd5cc-b456-4d69-897c-5fd543842440\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.525410 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-r2phw"] Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.528489 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcndj\" (UniqueName: \"kubernetes.io/projected/0c7d5fe0-885a-44e4-bacf-19bceeea178f-kube-api-access-rcndj\") pod \"console-operator-58897d9998-hzqgp\" (UID: \"0c7d5fe0-885a-44e4-bacf-19bceeea178f\") " pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.581678 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.582470 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:00.082442992 +0000 UTC m=+185.893269606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.607804 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.614413 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v5ctv" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.647764 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.648804 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.672257 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh"] Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.682949 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.683498 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:00.183479285 +0000 UTC m=+185.994305719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.686852 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.687920 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" Feb 26 11:13:59 crc kubenswrapper[4699]: W0226 11:13:59.697868 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00fcad37_801c_4a2c_8599_dabd0f36db6d.slice/crio-83e0b6c3f4ca93790fd661d1c95c61e7c1263345d7f3b26630b620b6112340e3 WatchSource:0}: Error finding container 83e0b6c3f4ca93790fd661d1c95c61e7c1263345d7f3b26630b620b6112340e3: Status 404 returned error can't find the container with id 83e0b6c3f4ca93790fd661d1c95c61e7c1263345d7f3b26630b620b6112340e3 Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.712602 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.717572 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.749817 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.755523 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.776447 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.784927 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cd7cbed-d0bf-4d8c-933c-4d031170288a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vmxjr\" (UID: \"8cd7cbed-d0bf-4d8c-933c-4d031170288a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.785016 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.785488 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:00.285473787 +0000 UTC m=+186.096300221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.786033 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.788513 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.796681 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.800238 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cd7cbed-d0bf-4d8c-933c-4d031170288a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vmxjr\" (UID: \"8cd7cbed-d0bf-4d8c-933c-4d031170288a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.889290 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.889521 4699 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:00.389489678 +0000 UTC m=+186.200316122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.890007 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.890610 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:00.39059438 +0000 UTC m=+186.201420814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.907502 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.915062 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.991617 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.991926 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7chv\" (UniqueName: \"kubernetes.io/projected/679ffaa0-41b8-4638-8b4c-4c1f424812e4-kube-api-access-t7chv\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:13:59 crc kubenswrapper[4699]: E0226 11:13:59.994296 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-26 11:14:00.494261011 +0000 UTC m=+186.305087605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:13:59 crc kubenswrapper[4699]: I0226 11:13:59.997733 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7chv\" (UniqueName: \"kubernetes.io/projected/679ffaa0-41b8-4638-8b4c-4c1f424812e4-kube-api-access-t7chv\") pod \"machine-approver-56656f9798-p742p\" (UID: \"679ffaa0-41b8-4638-8b4c-4c1f424812e4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.005105 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-22qbz"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.057489 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.094384 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:00 crc kubenswrapper[4699]: E0226 11:14:00.094889 4699 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:00.594869941 +0000 UTC m=+186.405696375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.097368 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qzphl"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.153859 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gsl8w"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.153912 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tnwpn"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.160373 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qsj62"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.160820 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xbpcs"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.168737 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535074-bjfld"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.170323 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535074-bjfld" Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.171068 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535074-bjfld"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.177225 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.188257 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.197612 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:00 crc kubenswrapper[4699]: E0226 11:14:00.198079 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:00.698059438 +0000 UTC m=+186.508885872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.212399 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" event={"ID":"1d3e449f-d082-43cb-951d-53d82fde40ca","Type":"ContainerStarted","Data":"317c5b2414bf1b5b19ec4ab8c5f7d2097d6a25f530ff2a0a2426b0eae6da2595"} Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.214678 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" event={"ID":"44832f39-2c56-4669-b328-7e663f6cacdf","Type":"ContainerStarted","Data":"dcfcf531fe129586ab3d0914bc42572e1dd887de749c4a861cb770ab36f5adda"} Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.217822 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-r2phw" event={"ID":"00fcad37-801c-4a2c-8599-dabd0f36db6d","Type":"ContainerStarted","Data":"83e0b6c3f4ca93790fd661d1c95c61e7c1263345d7f3b26630b620b6112340e3"} Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.220346 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" event={"ID":"9c1f6032-b723-4cb3-a93b-73d053eaf822","Type":"ContainerStarted","Data":"16e3e5ef471b13f7f9217a4479069861dfc04956adc76207b12f862e2b4b3359"} Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.221095 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" event={"ID":"0aac34b6-aad8-4b68-8180-f68af008611d","Type":"ContainerStarted","Data":"9c569a7b4a5730a6fd5f622d9d8580518b4d59861706d52f5fa570b1144b6f7d"} Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.221900 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" event={"ID":"5f6e45f7-93da-46b8-9021-d2500076c385","Type":"ContainerStarted","Data":"518234f9f0a4807650213e3d0e0a0e32a30602572f66d2aa83784c4337aef135"} Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.223182 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" event={"ID":"36efccb8-7513-43d0-8952-d7ad9546da8e","Type":"ContainerStarted","Data":"5ef61d4e75602d31a9e85e25cd1cda7253e5ef4c2b643144b5584ca6ef9e0885"} Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.224010 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" event={"ID":"fbc38281-b1a4-4c40-a707-a106b651c107","Type":"ContainerStarted","Data":"b265116c888ddda8a0c13680e3b228945374fe791d1bcfef434de7fce7ea1caa"} Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.236025 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" event={"ID":"34163385-0c26-4d54-a06a-11f9ef09901d","Type":"ContainerStarted","Data":"7fffc66eeb4956b76f91d87b3eef8ab447b9fb0a0df8dbccb897804b145f18ab"} Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.240529 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" event={"ID":"5f8a28b8-c47b-4288-877f-8e90a3b581b5","Type":"ContainerStarted","Data":"9cc8202a0a693b54f9a7afa4f72146520cc57d28a34110bea4d4992553af18b6"} Feb 26 11:14:00 crc kubenswrapper[4699]: 
I0226 11:14:00.241547 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" event={"ID":"6b070a40-85a6-42e6-a1bd-d834170a9c9c","Type":"ContainerStarted","Data":"508efe6a42d61af98e55fb58855c1da8e28f40533928f5ed84603c8a6eae2c6e"} Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.248417 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.248615 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rlx7c" event={"ID":"6b9ab605-cf5d-43ea-9554-20032a52e23c","Type":"ContainerStarted","Data":"13d23702e952465cfb024c403f3a037dbd8825ad6aadefa49886bdf076336718"} Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.252443 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.254355 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" event={"ID":"460579d9-ed16-49b7-a588-ef20ceb9bbf4","Type":"ContainerStarted","Data":"63bb7c60f2814c3bca3cd8b3c25228a93bf6b0e1f5746a31ee1416ba021a86ef"} Feb 26 11:14:00 crc kubenswrapper[4699]: W0226 11:14:00.278295 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a9875bc_9f2e_4887_8dc6_a00cc789eb4a.slice/crio-2596306f1087bf01b8a58fd8b0bb65d12065ae02e0851934fb89d1efcdbc1abe WatchSource:0}: Error finding container 2596306f1087bf01b8a58fd8b0bb65d12065ae02e0851934fb89d1efcdbc1abe: Status 404 returned error can't find the container with id 2596306f1087bf01b8a58fd8b0bb65d12065ae02e0851934fb89d1efcdbc1abe Feb 26 11:14:00 crc 
kubenswrapper[4699]: I0226 11:14:00.300231 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq8lx\" (UniqueName: \"kubernetes.io/projected/30d444da-9127-459c-97c6-cdcff5b20e67-kube-api-access-qq8lx\") pod \"auto-csr-approver-29535074-bjfld\" (UID: \"30d444da-9127-459c-97c6-cdcff5b20e67\") " pod="openshift-infra/auto-csr-approver-29535074-bjfld" Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.300289 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:00 crc kubenswrapper[4699]: E0226 11:14:00.302494 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:00.802477201 +0000 UTC m=+186.613303635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.404368 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:00 crc kubenswrapper[4699]: E0226 11:14:00.404581 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:00.904547794 +0000 UTC m=+186.715374228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.404703 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq8lx\" (UniqueName: \"kubernetes.io/projected/30d444da-9127-459c-97c6-cdcff5b20e67-kube-api-access-qq8lx\") pod \"auto-csr-approver-29535074-bjfld\" (UID: \"30d444da-9127-459c-97c6-cdcff5b20e67\") " pod="openshift-infra/auto-csr-approver-29535074-bjfld" Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.404740 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:00 crc kubenswrapper[4699]: E0226 11:14:00.405051 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:00.905040499 +0000 UTC m=+186.715866933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.460665 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq8lx\" (UniqueName: \"kubernetes.io/projected/30d444da-9127-459c-97c6-cdcff5b20e67-kube-api-access-qq8lx\") pod \"auto-csr-approver-29535074-bjfld\" (UID: \"30d444da-9127-459c-97c6-cdcff5b20e67\") " pod="openshift-infra/auto-csr-approver-29535074-bjfld" Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.470470 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pw64v"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.474077 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tcnxt"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.476342 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hnsh7"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.511465 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:00 crc kubenswrapper[4699]: E0226 11:14:00.511999 4699 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.011982667 +0000 UTC m=+186.822809101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.552230 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535074-bjfld" Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.563839 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.574003 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cd5qf"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.574051 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.613044 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:00 crc kubenswrapper[4699]: 
E0226 11:14:00.613428 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.113414362 +0000 UTC m=+186.924240796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.714316 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:00 crc kubenswrapper[4699]: E0226 11:14:00.714707 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.214692412 +0000 UTC m=+187.025518846 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.743288 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.749207 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.757347 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.761502 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-f8s5j"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.766347 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.816050 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:00 crc kubenswrapper[4699]: E0226 11:14:00.817163 4699 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.317145817 +0000 UTC m=+187.127972251 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.845696 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.875066 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b"] Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.926240 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:00 crc kubenswrapper[4699]: E0226 11:14:00.926608 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.426593718 +0000 UTC m=+187.237420142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:00 crc kubenswrapper[4699]: I0226 11:14:00.953853 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v5ctv"] Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.028239 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.028632 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.52861707 +0000 UTC m=+187.339443504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.063221 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr"] Feb 26 11:14:01 crc kubenswrapper[4699]: W0226 11:14:01.066685 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03fd3407_9529_4638_89d6_cfc6b703e510.slice/crio-c353d9bed35789a32b2ea23a2f94f9c2e40f463057a2ff1c95e594e8b545182a WatchSource:0}: Error finding container c353d9bed35789a32b2ea23a2f94f9c2e40f463057a2ff1c95e594e8b545182a: Status 404 returned error can't find the container with id c353d9bed35789a32b2ea23a2f94f9c2e40f463057a2ff1c95e594e8b545182a Feb 26 11:14:01 crc kubenswrapper[4699]: W0226 11:14:01.101837 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6956c039_cf77_429b_8f7f_f93ba195d321.slice/crio-2d2c52d524532517fe59c7ad81e75737aa073b7328d6c10323d7e6a7fd831621 WatchSource:0}: Error finding container 2d2c52d524532517fe59c7ad81e75737aa073b7328d6c10323d7e6a7fd831621: Status 404 returned error can't find the container with id 2d2c52d524532517fe59c7ad81e75737aa073b7328d6c10323d7e6a7fd831621 Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.129503 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.129640 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.629612332 +0000 UTC m=+187.440438766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.129797 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.130260 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.630241291 +0000 UTC m=+187.441067775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.207241 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j6vfb"] Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.223711 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb"] Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.231321 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr"] Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.232690 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.232859 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.73283318 +0000 UTC m=+187.543659624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.233014 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.233399 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.733385616 +0000 UTC m=+187.544212050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.248530 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hzqgp"] Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.294358 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" event={"ID":"9c1f6032-b723-4cb3-a93b-73d053eaf822","Type":"ContainerStarted","Data":"ca14ca703cbd66fd56d88323124a5c239a71a62cb0959b8249347a60a5a6bd7a"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.296696 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" event={"ID":"afa5e1ce-a457-4771-ab06-2654a7801704","Type":"ContainerStarted","Data":"28d18ab4af63ee8cd3146ef370273dcf2ab66f715c42eb29a815f51c721d1a2b"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.304995 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hnsh7" event={"ID":"e6bdcf19-db76-497c-a2fe-a6de38fae724","Type":"ContainerStarted","Data":"70e987324485f04a528051e1c4554d8c5806c907f67af5218c0970ab13cf9e3b"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.310935 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" 
event={"ID":"727302ed-b5c0-49b7-be17-7da9387c16c3","Type":"ContainerStarted","Data":"37d1d479ad57c1095d857b9bb52d51e059395ee6131cd8e758475e415c3fe86e"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.321105 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" event={"ID":"5f6e45f7-93da-46b8-9021-d2500076c385","Type":"ContainerStarted","Data":"ac7df773855e739622ed1fc070613c600351a5438de68a58ec6d7302b25a923f"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.321644 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.324198 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" event={"ID":"796e9631-3388-48b1-8675-3fbc4b6e435d","Type":"ContainerStarted","Data":"4e6b4035a7e79b8d64117aea9e3cf6e2de88e935585f4e47f14b7523b0476a41"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.324434 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" event={"ID":"796e9631-3388-48b1-8675-3fbc4b6e435d","Type":"ContainerStarted","Data":"d46528a0707304b437a71f9c1955cd18d8b2071a99d85c6b9da1246d823f4c34"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.326174 4699 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-czs8l container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.326237 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" podUID="5f6e45f7-93da-46b8-9021-d2500076c385" 
containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 26 11:14:01 crc kubenswrapper[4699]: W0226 11:14:01.328509 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfab52d01_f907_44cb_8d5f_162116d75fc9.slice/crio-7e76c7fe195b5c4a77f4cf9d0dcbabd8f147496fccdf7af35990ca526dc334e9 WatchSource:0}: Error finding container 7e76c7fe195b5c4a77f4cf9d0dcbabd8f147496fccdf7af35990ca526dc334e9: Status 404 returned error can't find the container with id 7e76c7fe195b5c4a77f4cf9d0dcbabd8f147496fccdf7af35990ca526dc334e9 Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.331811 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" event={"ID":"744aa737-e6c7-4d6b-ba7d-a9479043ad29","Type":"ContainerStarted","Data":"3f258b9ae41f11af5114ab5232e03c4aa9dff40c08fe1e6fde31d40c3ec891ec"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.333933 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.334194 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.834150421 +0000 UTC m=+187.644976865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: W0226 11:14:01.334257 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ecd5cc_b456_4d69_897c_5fd543842440.slice/crio-9488713a40a5a4743e81c00be9ca4d60ac9ec1a1f8456efac6adb5d8c076245a WatchSource:0}: Error finding container 9488713a40a5a4743e81c00be9ca4d60ac9ec1a1f8456efac6adb5d8c076245a: Status 404 returned error can't find the container with id 9488713a40a5a4743e81c00be9ca4d60ac9ec1a1f8456efac6adb5d8c076245a Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.334404 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.335139 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.835091239 +0000 UTC m=+187.645917673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: W0226 11:14:01.360760 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09191eec_0be2_4c45_9249_6c8081d6108a.slice/crio-e8aa800314c45bdd20aa1e8f81dcaae8818e4823d3d14052fbf7e64cff8fc222 WatchSource:0}: Error finding container e8aa800314c45bdd20aa1e8f81dcaae8818e4823d3d14052fbf7e64cff8fc222: Status 404 returned error can't find the container with id e8aa800314c45bdd20aa1e8f81dcaae8818e4823d3d14052fbf7e64cff8fc222 Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.367060 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f8rd9" podStartSLOduration=135.367033882 podStartE2EDuration="2m15.367033882s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:01.359654774 +0000 UTC m=+187.170481208" watchObservedRunningTime="2026-02-26 11:14:01.367033882 +0000 UTC m=+187.177860326" Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.377726 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" event={"ID":"64bd7009-a06a-43e1-b265-3ea78b5801b9","Type":"ContainerStarted","Data":"546f4e9939bdcb6b6a69bb68130d2cc00c06f5fb76617988db3d5893ca3c3033"} Feb 26 11:14:01 crc 
kubenswrapper[4699]: I0226 11:14:01.377788 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" event={"ID":"64bd7009-a06a-43e1-b265-3ea78b5801b9","Type":"ContainerStarted","Data":"461c75688cba99d404711ec29bfef434c23a4cfc300f855eaa6bacc80a839a48"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.379771 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xm88w" event={"ID":"4a97e310-1811-48a9-a31a-eb9a0321d280","Type":"ContainerStarted","Data":"5b91a295abc1a424ee2421a049bdfa52ce074ce87f531c64227c3c3c8d36a6a3"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.381444 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb" event={"ID":"af5429d7-39d0-4b17-8219-21c8491384ae","Type":"ContainerStarted","Data":"0e4ca5729966bb0667537e4367e68fd53633c240e0f3a0334bb9d0da3844c7d8"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.383569 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" event={"ID":"bad776f4-e24b-41f1-88d8-2b1fe6258783","Type":"ContainerStarted","Data":"fa85da4b93436fb19c42dc8f6a401ea56d65809f1ba71d6912b236113b2c8e20"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.386949 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" event={"ID":"5cc10041-704b-4b00-8e4e-369103434b64","Type":"ContainerStarted","Data":"be07ebbed72d10e6a52397198b9b567e946941b2a2ee6b1a35e4358ea9958b9f"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.392729 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" 
event={"ID":"89928475-c3fb-415f-a244-6292dc8adc33","Type":"ContainerStarted","Data":"eb58766da6db5ad222a77172d145c88eee88ef68556793b7b04b904329008f36"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.410805 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" event={"ID":"8cd7cbed-d0bf-4d8c-933c-4d031170288a","Type":"ContainerStarted","Data":"59dfddc3a12d600de8024a8dc9e456537fbd7f53513a49c25752922952cca5ef"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.439682 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.440091 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:01.940045248 +0000 UTC m=+187.750871682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.444888 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" podStartSLOduration=134.44487007 podStartE2EDuration="2m14.44487007s" podCreationTimestamp="2026-02-26 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:01.400205422 +0000 UTC m=+187.211031856" watchObservedRunningTime="2026-02-26 11:14:01.44487007 +0000 UTC m=+187.255696524" Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.454232 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" event={"ID":"61a1581f-5367-4535-99bc-3f28547ab766","Type":"ContainerStarted","Data":"74cac48911929390e354aa36a7f066350a1dc8abd1a812803d29897753864181"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.459975 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" event={"ID":"44832f39-2c56-4669-b328-7e663f6cacdf","Type":"ContainerStarted","Data":"e5ba7421b7ee0eabc1adc04968b758675d888f789421bf4eac320c3bfbc2a860"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.460819 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.461856 4699 
patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-w7nqx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.461886 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" podUID="44832f39-2c56-4669-b328-7e663f6cacdf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.464666 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" event={"ID":"5f8a28b8-c47b-4288-877f-8e90a3b581b5","Type":"ContainerStarted","Data":"61a2c48ee6bf74ea4766fbbb38a98752e4fc1a270493117d88d14b6af7b2c988"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.483877 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" podStartSLOduration=134.483862111 podStartE2EDuration="2m14.483862111s" podCreationTimestamp="2026-02-26 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:01.481900764 +0000 UTC m=+187.292727218" watchObservedRunningTime="2026-02-26 11:14:01.483862111 +0000 UTC m=+187.294688545" Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.489030 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tnwpn" event={"ID":"89f840f7-d21f-4028-b53d-ed0e2061ff15","Type":"ContainerStarted","Data":"0e40d9202d784aea686c0e49c7ec729c734c46a5b71cd5e4083460e2aac9610a"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 
11:14:01.500734 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qzphl" event={"ID":"79a9064f-5fcf-42f7-af6f-71aeeb75560e","Type":"ContainerStarted","Data":"b0dae2453795efff0471c82ab81da1da672c09bf5a5f39865ea967629840dfc5"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.508882 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" event={"ID":"679ffaa0-41b8-4638-8b4c-4c1f424812e4","Type":"ContainerStarted","Data":"0ad7548462393f609cec4e6348b963e054d02e1bfee6adfff9394544e4d429bf"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.518423 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" event={"ID":"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1","Type":"ContainerStarted","Data":"3f775704b88aa7370619139cbc98bee3d682cdf286fe1a2c9be29db696c38016"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.524042 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" event={"ID":"23bae79f-03c7-4710-ac97-25da2c7988c4","Type":"ContainerStarted","Data":"354a6d0c0425c100273b0e3aa2ab50be9bee531b1d3bb56c3f721024b7d92514"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.526883 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" podStartSLOduration=135.526865361 podStartE2EDuration="2m15.526865361s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:01.524252484 +0000 UTC m=+187.335078918" watchObservedRunningTime="2026-02-26 11:14:01.526865361 +0000 UTC m=+187.337691805" Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.526920 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" event={"ID":"1a9875bc-9f2e-4887-8dc6-a00cc789eb4a","Type":"ContainerStarted","Data":"2596306f1087bf01b8a58fd8b0bb65d12065ae02e0851934fb89d1efcdbc1abe"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.528231 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" event={"ID":"b4f243e8-e08c-420e-a78b-02e6a14bf5fe","Type":"ContainerStarted","Data":"106fcb2e12af20c2b9fe2679e1afb633dc9be2ac23683a5f742b1ed9156b7029"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.529404 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" event={"ID":"6956c039-cf77-429b-8f7f-f93ba195d321","Type":"ContainerStarted","Data":"2d2c52d524532517fe59c7ad81e75737aa073b7328d6c10323d7e6a7fd831621"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.531393 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" event={"ID":"03fd3407-9529-4638-89d6-cfc6b703e510","Type":"ContainerStarted","Data":"c353d9bed35789a32b2ea23a2f94f9c2e40f463057a2ff1c95e594e8b545182a"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.534332 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" event={"ID":"36efccb8-7513-43d0-8952-d7ad9546da8e","Type":"ContainerStarted","Data":"3d61ef78d0d188a8104bb6b9e11972fc2e015d1356eaec6fa7c27c6982fbcadf"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.540987 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.542514 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" event={"ID":"1d3e449f-d082-43cb-951d-53d82fde40ca","Type":"ContainerStarted","Data":"f1623375653186ceeeee97072b6c40764d593d9979efcd32aef8063f9c0bbe28"} Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.542774 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.04275954 +0000 UTC m=+187.853585974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.545442 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" event={"ID":"fbc38281-b1a4-4c40-a707-a106b651c107","Type":"ContainerStarted","Data":"a3681115709c674e5549240fab4b9c4a64d0d58866d20fa14451fcd36adf8436"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.546701 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" event={"ID":"0aac34b6-aad8-4b68-8180-f68af008611d","Type":"ContainerStarted","Data":"aff982394be5aa630df779a0b1c809a74f2b8b80c8796a2d43b3d6187d5bfafc"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.548217 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" event={"ID":"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466","Type":"ContainerStarted","Data":"9d7ac90385fbaeacd88791e44cd5f3dbc802f7727daac69d69660d2d1079d013"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.550650 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" event={"ID":"5d015dd8-56c9-4f61-b133-4951cda91ca5","Type":"ContainerStarted","Data":"ebd3f92a38859a47f362253252572ab4dfa30e38ca7be5dde9c7b9c8d20415d3"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.556331 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tcnxt" event={"ID":"72b1bc55-f48b-4d90-ab02-3a80438096b6","Type":"ContainerStarted","Data":"06f7357a2582fe2f5b7f2bf23417d511d550f2b8ba54d68e0488f4ecdfb3c7b1"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.559512 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zqgj9" podStartSLOduration=134.559499275 podStartE2EDuration="2m14.559499275s" podCreationTimestamp="2026-02-26 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:01.558593698 +0000 UTC m=+187.369420132" watchObservedRunningTime="2026-02-26 11:14:01.559499275 +0000 UTC m=+187.370325709" Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.574448 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" event={"ID":"460579d9-ed16-49b7-a588-ef20ceb9bbf4","Type":"ContainerStarted","Data":"1c18b04fd38d4e5cfb82fc15f10a9055f343d1ab9d91ad12c3161750e4de76b7"} Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.604873 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tg675" podStartSLOduration=135.604849724 podStartE2EDuration="2m15.604849724s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:01.603468863 +0000 UTC m=+187.414295307" watchObservedRunningTime="2026-02-26 11:14:01.604849724 +0000 UTC m=+187.415676158" Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.627854 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535074-bjfld"] Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.644998 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.646457 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.146411851 +0000 UTC m=+187.957238315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.648888 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpntx" podStartSLOduration=135.648867943 podStartE2EDuration="2m15.648867943s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:01.648191943 +0000 UTC m=+187.459018377" watchObservedRunningTime="2026-02-26 11:14:01.648867943 +0000 UTC m=+187.459694377" Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.746570 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.747029 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.247006841 +0000 UTC m=+188.057833305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.806661 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.848421 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.848595 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.348573149 +0000 UTC m=+188.159399583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.848980 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.849730 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.349505966 +0000 UTC m=+188.160332400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.950614 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.950777 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.450752255 +0000 UTC m=+188.261578689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:01 crc kubenswrapper[4699]: I0226 11:14:01.951272 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:01 crc kubenswrapper[4699]: E0226 11:14:01.951697 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.451685723 +0000 UTC m=+188.262512227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.053284 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.053773 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.553738506 +0000 UTC m=+188.364564940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.054248 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.054629 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.554611992 +0000 UTC m=+188.365438426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.157762 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.158109 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.658063086 +0000 UTC m=+188.468889650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.158434 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.159040 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.659031155 +0000 UTC m=+188.469857589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.261953 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.262402 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.762380956 +0000 UTC m=+188.573207390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.363217 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.364615 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.86446106 +0000 UTC m=+188.675287494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.467749 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.468246 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:02.968229194 +0000 UTC m=+188.779055628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.569028 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.569616 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.069593467 +0000 UTC m=+188.880419971 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.587165 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hnsh7" event={"ID":"e6bdcf19-db76-497c-a2fe-a6de38fae724","Type":"ContainerStarted","Data":"53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.588709 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" event={"ID":"727302ed-b5c0-49b7-be17-7da9387c16c3","Type":"ContainerStarted","Data":"fe9d1cdec02a8cee8cf95b26e05b856087c021ba99aa27e5af51dcfe0240cf0f"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.590662 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" event={"ID":"5d015dd8-56c9-4f61-b133-4951cda91ca5","Type":"ContainerStarted","Data":"4d27114bb717e86cff571e28c7aba751067298ba670dfaa69e6d2ee075e6f067"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.593381 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" event={"ID":"5cc10041-704b-4b00-8e4e-369103434b64","Type":"ContainerStarted","Data":"2d5a0c0e5922846f3c8cbf86200e16bf7b7b0416026b9755460756c2a821cc04"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.595020 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535074-bjfld" 
event={"ID":"30d444da-9127-459c-97c6-cdcff5b20e67","Type":"ContainerStarted","Data":"18076c1c5e0cfb7ca48ea66321abbc8359663b222708aa29c8481673d9c4ff5c"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.596448 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tnwpn" event={"ID":"89f840f7-d21f-4028-b53d-ed0e2061ff15","Type":"ContainerStarted","Data":"2318dbfdb102c963d608346d7789eb7d0ad448ba89834a14c0ed9536fff0d574"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.597758 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hzqgp" event={"ID":"0c7d5fe0-885a-44e4-bacf-19bceeea178f","Type":"ContainerStarted","Data":"8e91007fd24b6b235fe00b711c6276d26c8ecc825c1024e041f2ce0b0f9007bd"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.600296 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" event={"ID":"34163385-0c26-4d54-a06a-11f9ef09901d","Type":"ContainerStarted","Data":"b82c4200e29693534baf39c659f06cfff7e11c6e83f1aa3114cbdca96a978e17"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.602288 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" event={"ID":"744aa737-e6c7-4d6b-ba7d-a9479043ad29","Type":"ContainerStarted","Data":"e82195f6e02bee889f35431d957fa456c0b1db807d039ddcd99b290da7bc9288"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.604618 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rlx7c" event={"ID":"6b9ab605-cf5d-43ea-9554-20032a52e23c","Type":"ContainerStarted","Data":"37d92097cdf24750a95f125a3a12ba947a11479227a9d589f2bff87c62cf1dea"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.605859 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-ctfcc" podStartSLOduration=135.605843457 podStartE2EDuration="2m15.605843457s" podCreationTimestamp="2026-02-26 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:01.689170353 +0000 UTC m=+187.499996797" watchObservedRunningTime="2026-02-26 11:14:02.605843457 +0000 UTC m=+188.416669891" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.606105 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vvgmh" podStartSLOduration=136.606099195 podStartE2EDuration="2m16.606099195s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:02.605151787 +0000 UTC m=+188.415978221" watchObservedRunningTime="2026-02-26 11:14:02.606099195 +0000 UTC m=+188.416925639" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.613134 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb" event={"ID":"af5429d7-39d0-4b17-8219-21c8491384ae","Type":"ContainerStarted","Data":"eb6b24300a1f0473a2db7791c96c541a2a5af8105921dfceb5d170fda26524a1"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.614384 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" event={"ID":"fab52d01-f907-44cb-8d5f-162116d75fc9","Type":"ContainerStarted","Data":"7e76c7fe195b5c4a77f4cf9d0dcbabd8f147496fccdf7af35990ca526dc334e9"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.621528 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-rlx7c" podStartSLOduration=7.621505749 
podStartE2EDuration="7.621505749s" podCreationTimestamp="2026-02-26 11:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:02.620005705 +0000 UTC m=+188.430832159" watchObservedRunningTime="2026-02-26 11:14:02.621505749 +0000 UTC m=+188.432332183" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.624341 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" event={"ID":"03fd3407-9529-4638-89d6-cfc6b703e510","Type":"ContainerStarted","Data":"ec8bc8192ce6082446d639b19d1c7574145a66a52226a83d86f89ce7579a3a4e"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.630081 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" event={"ID":"61a1581f-5367-4535-99bc-3f28547ab766","Type":"ContainerStarted","Data":"3de5a5b46da3ed916b21ce510b57be99f1b3297b846ab542ef9a25da569ef185"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.632843 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-r2phw" event={"ID":"00fcad37-801c-4a2c-8599-dabd0f36db6d","Type":"ContainerStarted","Data":"29099d235a86ea4029c6ad27a31f4f0f3f48126cf629096571b61afd018fa9a5"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.634890 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" event={"ID":"e0ecd5cc-b456-4d69-897c-5fd543842440","Type":"ContainerStarted","Data":"9488713a40a5a4743e81c00be9ca4d60ac9ec1a1f8456efac6adb5d8c076245a"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.637070 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" 
event={"ID":"1a9875bc-9f2e-4887-8dc6-a00cc789eb4a","Type":"ContainerStarted","Data":"2ec7241379b9f3b05752fa70dd15ed6b5a7df760f3526f5163051cfc14d835f1"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.637646 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.638455 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" event={"ID":"09191eec-0be2-4c45-9249-6c8081d6108a","Type":"ContainerStarted","Data":"e8aa800314c45bdd20aa1e8f81dcaae8818e4823d3d14052fbf7e64cff8fc222"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.639160 4699 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xvgnb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.639202 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" podUID="1a9875bc-9f2e-4887-8dc6-a00cc789eb4a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.640044 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" event={"ID":"bad776f4-e24b-41f1-88d8-2b1fe6258783","Type":"ContainerStarted","Data":"f6ef511605018ef6334a323102f99d31a070e7c94cc362d42542c1f9238cf81b"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.644784 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" 
event={"ID":"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466","Type":"ContainerStarted","Data":"74570cc7e5f47cfb5ae78c7040168924d22c48d5892dd1918f787cd4639c996c"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.647086 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" event={"ID":"89928475-c3fb-415f-a244-6292dc8adc33","Type":"ContainerStarted","Data":"f8083a65bee010aee31678eea79efabcd118304cb7619ff3f9d81c06b0d356ae"} Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.648066 4699 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-w7nqx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.648132 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" podUID="44832f39-2c56-4669-b328-7e663f6cacdf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.648194 4699 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-czs8l container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.648215 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" podUID="5f6e45f7-93da-46b8-9021-d2500076c385" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: 
connect: connection refused" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.648451 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.650324 4699 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-gsl8w container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.650386 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" podUID="796e9631-3388-48b1-8675-3fbc4b6e435d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.650844 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-r2phw" podStartSLOduration=7.650829735 podStartE2EDuration="7.650829735s" podCreationTimestamp="2026-02-26 11:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:02.650520836 +0000 UTC m=+188.461347290" watchObservedRunningTime="2026-02-26 11:14:02.650829735 +0000 UTC m=+188.461656169" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.672960 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:02 crc kubenswrapper[4699]: 
E0226 11:14:02.673381 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.17333824 +0000 UTC m=+188.984164834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.694345 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" podStartSLOduration=136.694320819 podStartE2EDuration="2m16.694320819s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:02.693605778 +0000 UTC m=+188.504432232" watchObservedRunningTime="2026-02-26 11:14:02.694320819 +0000 UTC m=+188.505147253" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.714910 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2zshh" podStartSLOduration=136.714876726 podStartE2EDuration="2m16.714876726s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:02.713915588 +0000 UTC m=+188.524742042" watchObservedRunningTime="2026-02-26 
11:14:02.714876726 +0000 UTC m=+188.525703160" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.733012 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" podStartSLOduration=135.732984641 podStartE2EDuration="2m15.732984641s" podCreationTimestamp="2026-02-26 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:02.731789526 +0000 UTC m=+188.542615960" watchObservedRunningTime="2026-02-26 11:14:02.732984641 +0000 UTC m=+188.543811075" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.751794 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-qsj62" podStartSLOduration=136.751766245 podStartE2EDuration="2m16.751766245s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:02.750095386 +0000 UTC m=+188.560921830" watchObservedRunningTime="2026-02-26 11:14:02.751766245 +0000 UTC m=+188.562592679" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.767835 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9wj4" podStartSLOduration=136.767809429 podStartE2EDuration="2m16.767809429s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:02.76717853 +0000 UTC m=+188.578004974" watchObservedRunningTime="2026-02-26 11:14:02.767809429 +0000 UTC m=+188.578635863" Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.775513 4699 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.776867 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.276847366 +0000 UTC m=+189.087674000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.877463 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.877620 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.377601471 +0000 UTC m=+189.188427895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.878072 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.878375 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.378366683 +0000 UTC m=+189.189193117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.979153 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.979398 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.479366135 +0000 UTC m=+189.290192579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:02 crc kubenswrapper[4699]: I0226 11:14:02.979514 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:02 crc kubenswrapper[4699]: E0226 11:14:02.979932 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.479921592 +0000 UTC m=+189.290748106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.080211 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:03 crc kubenswrapper[4699]: E0226 11:14:03.080699 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.580679507 +0000 UTC m=+189.391505941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.186959 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:03 crc kubenswrapper[4699]: E0226 11:14:03.187367 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.687351616 +0000 UTC m=+189.498178040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.288415 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:03 crc kubenswrapper[4699]: E0226 11:14:03.288898 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.788878794 +0000 UTC m=+189.599705228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.390615 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:03 crc kubenswrapper[4699]: E0226 11:14:03.391143 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.891122972 +0000 UTC m=+189.701949406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.492184 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:03 crc kubenswrapper[4699]: E0226 11:14:03.492611 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:03.992590368 +0000 UTC m=+189.803416812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.594724 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:03 crc kubenswrapper[4699]: E0226 11:14:03.595951 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:04.095931279 +0000 UTC m=+189.906757713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.679708 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" event={"ID":"b4f243e8-e08c-420e-a78b-02e6a14bf5fe","Type":"ContainerStarted","Data":"418417a4b5a05608ddf0c9f7b70ff9d2d23d879dcec0660a1e84735a58ee62da"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.687013 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" event={"ID":"6956c039-cf77-429b-8f7f-f93ba195d321","Type":"ContainerStarted","Data":"2e342cce17684d6ff691ec10d2bcc3942a85d6162a78085ad90681a3a3df3576"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.690617 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" event={"ID":"fab52d01-f907-44cb-8d5f-162116d75fc9","Type":"ContainerStarted","Data":"d0a5ae12fe179876187bd5fcc07db538307a9088a6b77eef993ddb984feb4ce6"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.694841 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" event={"ID":"679ffaa0-41b8-4638-8b4c-4c1f424812e4","Type":"ContainerStarted","Data":"a2fef55cbc0542eb24393c676c5f5bd5d0c3e9eac64318f59af7579cb2cbaccb"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.702204 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:03 crc kubenswrapper[4699]: E0226 11:14:03.702658 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:04.20263715 +0000 UTC m=+190.013463584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.730026 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" event={"ID":"5d015dd8-56c9-4f61-b133-4951cda91ca5","Type":"ContainerStarted","Data":"e1b4b4117e357c9cec9f98a9ad6f893e6302ee88ae56e4d7312bba96bb4ecbc3"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.738283 4699 generic.go:334] "Generic (PLEG): container finished" podID="a550b2ea-3ce7-4df3-bbf5-f1025afca8c1" containerID="86924254b70133e8a088814a88e325681fd85745ba692e775d8e52888481afc0" exitCode=0 Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.738406 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" event={"ID":"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1","Type":"ContainerDied","Data":"86924254b70133e8a088814a88e325681fd85745ba692e775d8e52888481afc0"} Feb 26 11:14:03 crc 
kubenswrapper[4699]: I0226 11:14:03.746913 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" event={"ID":"1d3e449f-d082-43cb-951d-53d82fde40ca","Type":"ContainerStarted","Data":"f9141d6a8a5238a8078b83999f7ed2b68f3d889e35ec03a155b62c335ec78209"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.747934 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.763807 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tcnxt" event={"ID":"72b1bc55-f48b-4d90-ab02-3a80438096b6","Type":"ContainerStarted","Data":"a48cf703fa085cd2031065303843172d7d70091d4cf97de0a11e40328102d59a"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.768216 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-j6vfb" podStartSLOduration=137.768192835 podStartE2EDuration="2m17.768192835s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:03.720034914 +0000 UTC m=+189.530861358" watchObservedRunningTime="2026-02-26 11:14:03.768192835 +0000 UTC m=+189.579019259" Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.777322 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tcnxt" Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.780139 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:03 crc 
kubenswrapper[4699]: I0226 11:14:03.780225 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.785146 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb" event={"ID":"af5429d7-39d0-4b17-8219-21c8491384ae","Type":"ContainerStarted","Data":"2f9adae57e6d2b1af184657dc07068c4c1b867837d37037e20a68bce7fd5be7e"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.788182 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" event={"ID":"e0ecd5cc-b456-4d69-897c-5fd543842440","Type":"ContainerStarted","Data":"82c13a5022914e952a2bab53f2e05496950cc43642fe3a2c7f25c55df6a55a09"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.791684 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tnwpn" event={"ID":"89f840f7-d21f-4028-b53d-ed0e2061ff15","Type":"ContainerStarted","Data":"347daff707932d34a7509e2817d19f9589aa550ee3b39218f581167ad5b4425f"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.791868 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-tnwpn" Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.797442 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hzqgp" event={"ID":"0c7d5fe0-885a-44e4-bacf-19bceeea178f","Type":"ContainerStarted","Data":"4b9ba0a4dd4a72e7950717c813bae7e51efbff5926a279fa5d2cda039b11d068"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.798523 4699 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.802218 4699 patch_prober.go:28] interesting pod/console-operator-58897d9998-hzqgp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.802351 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hzqgp" podUID="0c7d5fe0-885a-44e4-bacf-19bceeea178f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.816203 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.833008 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" event={"ID":"61a1581f-5367-4535-99bc-3f28547ab766","Type":"ContainerStarted","Data":"bd8dd636cc1be15cadc79b3f065ace3f146caed71d3576259cd5542c9b4db330"} Feb 26 11:14:03 crc kubenswrapper[4699]: E0226 11:14:03.838669 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:04.338602904 +0000 UTC m=+190.149429348 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.842835 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" event={"ID":"8cd7cbed-d0bf-4d8c-933c-4d031170288a","Type":"ContainerStarted","Data":"5fb3c642cc2b4db8ee02516e04dda3e41ccb434f432279e5c63d1f26897348af"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.859668 4699 generic.go:334] "Generic (PLEG): container finished" podID="afa5e1ce-a457-4771-ab06-2654a7801704" containerID="1594adc44c33ac8ba0a68282a8063be46f209d5dc350a05f6bb0643ca257702a" exitCode=0 Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.859746 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" event={"ID":"afa5e1ce-a457-4771-ab06-2654a7801704","Type":"ContainerDied","Data":"1594adc44c33ac8ba0a68282a8063be46f209d5dc350a05f6bb0643ca257702a"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.865430 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-pw64v" podStartSLOduration=137.865407426 podStartE2EDuration="2m17.865407426s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:03.775998456 +0000 UTC m=+189.586824890" watchObservedRunningTime="2026-02-26 11:14:03.865407426 +0000 UTC m=+189.676233870" Feb 26 11:14:03 
crc kubenswrapper[4699]: I0226 11:14:03.908159 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" event={"ID":"460579d9-ed16-49b7-a588-ef20ceb9bbf4","Type":"ContainerStarted","Data":"412810a4b96fc8e6fac38daceab60328782f3b3751aa9492bb91d85380f550fc"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.909529 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" podStartSLOduration=136.909472537 podStartE2EDuration="2m16.909472537s" podCreationTimestamp="2026-02-26 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:03.864103057 +0000 UTC m=+189.674929491" watchObservedRunningTime="2026-02-26 11:14:03.909472537 +0000 UTC m=+189.720299001" Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.914227 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tcnxt" podStartSLOduration=137.914193766 podStartE2EDuration="2m17.914193766s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:03.910710733 +0000 UTC m=+189.721537167" watchObservedRunningTime="2026-02-26 11:14:03.914193766 +0000 UTC m=+189.725020210" Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.921898 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:03 crc kubenswrapper[4699]: E0226 11:14:03.922940 4699 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:04.422917434 +0000 UTC m=+190.233743858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.950045 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" event={"ID":"09191eec-0be2-4c45-9249-6c8081d6108a","Type":"ContainerStarted","Data":"4ec8095ed163cca6baee580c32afe95f34593492d8e83cb223fc13e4f1513d5e"} Feb 26 11:14:03 crc kubenswrapper[4699]: I0226 11:14:03.991295 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xm88w" event={"ID":"4a97e310-1811-48a9-a31a-eb9a0321d280","Type":"ContainerStarted","Data":"7d90a4f67d2dd0c0cf5d5064360d65b4fc14ff112f5931ab2169c9464452b8c9"} Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.026065 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:04 crc kubenswrapper[4699]: E0226 11:14:04.036729 4699 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:04.536706464 +0000 UTC m=+190.347532898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.054752 4699 generic.go:334] "Generic (PLEG): container finished" podID="03fd3407-9529-4638-89d6-cfc6b703e510" containerID="ec8bc8192ce6082446d639b19d1c7574145a66a52226a83d86f89ce7579a3a4e" exitCode=0 Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.058472 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" event={"ID":"03fd3407-9529-4638-89d6-cfc6b703e510","Type":"ContainerDied","Data":"ec8bc8192ce6082446d639b19d1c7574145a66a52226a83d86f89ce7579a3a4e"} Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.060144 4699 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xvgnb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.060225 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" podUID="1a9875bc-9f2e-4887-8dc6-a00cc789eb4a" containerName="catalog-operator" probeResult="failure" output="Get 
\"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.060740 4699 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-w7nqx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.060787 4699 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-gsl8w container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.060814 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" podUID="796e9631-3388-48b1-8675-3fbc4b6e435d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.060820 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" podUID="44832f39-2c56-4669-b328-7e663f6cacdf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.060910 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.061198 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 
26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.061235 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.063053 4699 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cd5qf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.063093 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" podUID="5cc10041-704b-4b00-8e4e-369103434b64" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.088274 4699 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fq7g8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.088327 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" podUID="744aa737-e6c7-4d6b-ba7d-a9479043ad29" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.088407 4699 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-22qbz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get 
\"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.088422 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" podUID="94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.129309 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:04 crc kubenswrapper[4699]: E0226 11:14:04.130677 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:04.630661538 +0000 UTC m=+190.441487972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.176813 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gngtb" podStartSLOduration=138.17679548 podStartE2EDuration="2m18.17679548s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:03.973482847 +0000 UTC m=+189.784309281" watchObservedRunningTime="2026-02-26 11:14:04.17679548 +0000 UTC m=+189.987621914" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.177558 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tm8w" podStartSLOduration=138.177553652 podStartE2EDuration="2m18.177553652s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.176042547 +0000 UTC m=+189.986868981" watchObservedRunningTime="2026-02-26 11:14:04.177553652 +0000 UTC m=+189.988380086" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.235168 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:04 crc kubenswrapper[4699]: E0226 11:14:04.235512 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:04.735500083 +0000 UTC m=+190.546326517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.331287 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-tnwpn" podStartSLOduration=9.33126705 podStartE2EDuration="9.33126705s" podCreationTimestamp="2026-02-26 11:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.231902677 +0000 UTC m=+190.042729121" watchObservedRunningTime="2026-02-26 11:14:04.33126705 +0000 UTC m=+190.142093484" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.336527 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-hzqgp" podStartSLOduration=138.336504125 podStartE2EDuration="2m18.336504125s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 
11:14:04.333832956 +0000 UTC m=+190.144659380" watchObservedRunningTime="2026-02-26 11:14:04.336504125 +0000 UTC m=+190.147330569" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.337271 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:04 crc kubenswrapper[4699]: E0226 11:14:04.337432 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:04.837411182 +0000 UTC m=+190.648237616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.337500 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:04 crc kubenswrapper[4699]: E0226 11:14:04.337902 4699 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:04.837892886 +0000 UTC m=+190.648719320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.424025 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmxjr" podStartSLOduration=138.424002508 podStartE2EDuration="2m18.424002508s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.421258967 +0000 UTC m=+190.232085421" watchObservedRunningTime="2026-02-26 11:14:04.424002508 +0000 UTC m=+190.234828952" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.438943 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:04 crc kubenswrapper[4699]: E0226 11:14:04.439352 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 11:14:04.939333941 +0000 UTC m=+190.750160375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.521452 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k6wtb" podStartSLOduration=138.521424985 podStartE2EDuration="2m18.521424985s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.520976842 +0000 UTC m=+190.331803276" watchObservedRunningTime="2026-02-26 11:14:04.521424985 +0000 UTC m=+190.332251419" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.541442 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:04 crc kubenswrapper[4699]: E0226 11:14:04.541909 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:05.041891719 +0000 UTC m=+190.852718153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.574444 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xbpcs" podStartSLOduration=138.57442396 podStartE2EDuration="2m18.57442396s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.572186013 +0000 UTC m=+190.383012457" watchObservedRunningTime="2026-02-26 11:14:04.57442396 +0000 UTC m=+190.385250384" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.622496 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-hnsh7" podStartSLOduration=138.622476128 podStartE2EDuration="2m18.622476128s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.619062697 +0000 UTC m=+190.429889151" watchObservedRunningTime="2026-02-26 11:14:04.622476128 +0000 UTC m=+190.433302562" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.645211 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:04 crc kubenswrapper[4699]: E0226 11:14:04.645578 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:05.14556308 +0000 UTC m=+190.956389504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.647866 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" podStartSLOduration=138.647853657 podStartE2EDuration="2m18.647853657s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.645835568 +0000 UTC m=+190.456662002" watchObservedRunningTime="2026-02-26 11:14:04.647853657 +0000 UTC m=+190.458680092" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.724472 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.726575 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" 
start-of-body= Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.726622 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.747221 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:04 crc kubenswrapper[4699]: E0226 11:14:04.747629 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:05.247614773 +0000 UTC m=+191.058441207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.842949 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xm88w" podStartSLOduration=138.842927147 podStartE2EDuration="2m18.842927147s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.77663874 +0000 UTC m=+190.587465184" watchObservedRunningTime="2026-02-26 11:14:04.842927147 +0000 UTC m=+190.653753581" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.844439 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" podStartSLOduration=137.844432531 podStartE2EDuration="2m17.844432531s" podCreationTimestamp="2026-02-26 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.842307279 +0000 UTC m=+190.653133733" watchObservedRunningTime="2026-02-26 11:14:04.844432531 +0000 UTC m=+190.655258965" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.848995 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:04 crc kubenswrapper[4699]: E0226 11:14:04.849491 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:05.34947232 +0000 UTC m=+191.160298754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.884596 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" podStartSLOduration=138.884575347 podStartE2EDuration="2m18.884575347s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.87723969 +0000 UTC m=+190.688066134" watchObservedRunningTime="2026-02-26 11:14:04.884575347 +0000 UTC m=+190.695401781" Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.954364 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:04 crc kubenswrapper[4699]: E0226 11:14:04.954745 
4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:05.454729358 +0000 UTC m=+191.265555792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:04 crc kubenswrapper[4699]: I0226 11:14:04.963571 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-k9bv4" podStartSLOduration=138.963549248 podStartE2EDuration="2m18.963549248s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.921736984 +0000 UTC m=+190.732563418" watchObservedRunningTime="2026-02-26 11:14:04.963549248 +0000 UTC m=+190.774375692" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.055842 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:05 crc kubenswrapper[4699]: E0226 11:14:05.056174 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:05.556151813 +0000 UTC m=+191.366978247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.159289 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.159350 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" event={"ID":"09191eec-0be2-4c45-9249-6c8081d6108a","Type":"ContainerStarted","Data":"b4e29a36046b28c5156e1d2d7970fd894db79a0c6417201cc321957230906e44"} Feb 26 11:14:05 crc kubenswrapper[4699]: E0226 11:14:05.159744 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:05.659729001 +0000 UTC m=+191.470555435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.194665 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" event={"ID":"03fd3407-9529-4638-89d6-cfc6b703e510","Type":"ContainerStarted","Data":"3fa0d5be17dd80720d17b8f823f5ed502ea5130df1ef1d9a2520602f0d57c64d"} Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.195302 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.201219 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" event={"ID":"679ffaa0-41b8-4638-8b4c-4c1f424812e4","Type":"ContainerStarted","Data":"de6932c7dc258437fbe25d836eeab2dd76c193476998fff02f811b5f0b5fde19"} Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.214147 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" event={"ID":"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1","Type":"ContainerStarted","Data":"dc90312239394159bc93a17b5e295cfe48f5d30d81ba0217af32834a3153cb80"} Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.228999 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gpjvh" podStartSLOduration=139.228976885 podStartE2EDuration="2m19.228976885s" 
podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:04.983722074 +0000 UTC m=+190.794548528" watchObservedRunningTime="2026-02-26 11:14:05.228976885 +0000 UTC m=+191.039803339" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.230334 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wcnnr" podStartSLOduration=139.230326825 podStartE2EDuration="2m19.230326825s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:05.228557053 +0000 UTC m=+191.039383497" watchObservedRunningTime="2026-02-26 11:14:05.230326825 +0000 UTC m=+191.041153259" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.265923 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:05 crc kubenswrapper[4699]: E0226 11:14:05.266206 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:05.766188304 +0000 UTC m=+191.577014738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.266548 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:05 crc kubenswrapper[4699]: E0226 11:14:05.270347 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:05.770330366 +0000 UTC m=+191.581156800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.283195 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" event={"ID":"b4f243e8-e08c-420e-a78b-02e6a14bf5fe","Type":"ContainerStarted","Data":"569bd3906af910300f4d7496d6bc8901382152ed8d3a57c679d317d53c830bef"} Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.287319 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p742p" podStartSLOduration=139.287300027 podStartE2EDuration="2m19.287300027s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:05.26977783 +0000 UTC m=+191.080604264" watchObservedRunningTime="2026-02-26 11:14:05.287300027 +0000 UTC m=+191.098126461" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.340716 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v5ctv" event={"ID":"6956c039-cf77-429b-8f7f-f93ba195d321","Type":"ContainerStarted","Data":"f7d17fd3b95c04cd27aba31e4f058d08f123e7867d573100242c1cdf0b285359"} Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.342007 4699 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fq7g8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.342059 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" podUID="744aa737-e6c7-4d6b-ba7d-a9479043ad29" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.343694 4699 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cd5qf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.343733 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" podUID="5cc10041-704b-4b00-8e4e-369103434b64" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.344279 4699 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-22qbz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.344307 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" podUID="94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: 
connect: connection refused" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.345964 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.346020 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.349585 4699 patch_prober.go:28] interesting pod/console-operator-58897d9998-hzqgp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.349628 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hzqgp" podUID="0c7d5fe0-885a-44e4-bacf-19bceeea178f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.369578 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:05 crc kubenswrapper[4699]: E0226 11:14:05.370799 4699 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:05.870775271 +0000 UTC m=+191.681601715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.445233 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" podStartSLOduration=139.445208828 podStartE2EDuration="2m19.445208828s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:05.443633342 +0000 UTC m=+191.254459776" watchObservedRunningTime="2026-02-26 11:14:05.445208828 +0000 UTC m=+191.256035262" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.484046 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:05 crc kubenswrapper[4699]: E0226 11:14:05.493079 4699 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:05.993057331 +0000 UTC m=+191.803883835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.494034 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-v5ctv" podStartSLOduration=139.494013909 podStartE2EDuration="2m19.494013909s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:05.493549916 +0000 UTC m=+191.304376360" watchObservedRunningTime="2026-02-26 11:14:05.494013909 +0000 UTC m=+191.304840353" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.576227 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxsr8" podStartSLOduration=139.576207046 podStartE2EDuration="2m19.576207046s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:05.542615504 +0000 UTC m=+191.353441948" watchObservedRunningTime="2026-02-26 11:14:05.576207046 +0000 UTC m=+191.387033480" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.598550 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:05 crc kubenswrapper[4699]: E0226 11:14:05.608261 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:06.107928473 +0000 UTC m=+191.918754907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.705897 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:05 crc kubenswrapper[4699]: E0226 11:14:05.706312 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:06.206299617 +0000 UTC m=+192.017126051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.729329 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:05 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:05 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:05 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.729400 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.807716 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:05 crc kubenswrapper[4699]: E0226 11:14:05.808252 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 11:14:06.308233557 +0000 UTC m=+192.119059991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:05 crc kubenswrapper[4699]: I0226 11:14:05.909836 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:05 crc kubenswrapper[4699]: E0226 11:14:05.910375 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:06.410356662 +0000 UTC m=+192.221183096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.011506 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.011714 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:06.511682353 +0000 UTC m=+192.322508797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.011884 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.012315 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:06.512300582 +0000 UTC m=+192.323127016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.113155 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.113322 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:06.613304534 +0000 UTC m=+192.424130968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.113470 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.113834 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:06.613824509 +0000 UTC m=+192.424650943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.214051 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.214500 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:06.714483541 +0000 UTC m=+192.525309975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.320463 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.320841 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:06.820826421 +0000 UTC m=+192.631652855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.321871 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" podStartSLOduration=139.321838091 podStartE2EDuration="2m19.321838091s" podCreationTimestamp="2026-02-26 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:05.579036559 +0000 UTC m=+191.389862993" watchObservedRunningTime="2026-02-26 11:14:06.321838091 +0000 UTC m=+192.132664525" Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.401074 4699 generic.go:334] "Generic (PLEG): container finished" podID="5f8a28b8-c47b-4288-877f-8e90a3b581b5" containerID="61a2c48ee6bf74ea4766fbbb38a98752e4fc1a270493117d88d14b6af7b2c988" exitCode=0 Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.401154 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" event={"ID":"5f8a28b8-c47b-4288-877f-8e90a3b581b5","Type":"ContainerDied","Data":"61a2c48ee6bf74ea4766fbbb38a98752e4fc1a270493117d88d14b6af7b2c988"} Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.421976 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.422407 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:06.92238958 +0000 UTC m=+192.733216014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.428097 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qzphl" event={"ID":"79a9064f-5fcf-42f7-af6f-71aeeb75560e","Type":"ContainerStarted","Data":"ee510303e64372f08a02ae5a20a53e3bab98135ee01fe168a220c71c4a9c91b8"} Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.434566 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" event={"ID":"a550b2ea-3ce7-4df3-bbf5-f1025afca8c1","Type":"ContainerStarted","Data":"5a55c50b4b793697906fc96de58dc7541e4deae7df08fd6347550897da648c81"} Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.439669 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" event={"ID":"afa5e1ce-a457-4771-ab06-2654a7801704","Type":"ContainerStarted","Data":"0a0d420653cca8d8cbd7e0ac3b8114b6062c04c58afe51e78fb7e6d70799880b"} Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.445650 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.445672 4699 patch_prober.go:28] interesting pod/console-operator-58897d9998-hzqgp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.445712 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.445787 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hzqgp" podUID="0c7d5fe0-885a-44e4-bacf-19bceeea178f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.526943 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.529486 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-26 11:14:07.029464491 +0000 UTC m=+192.840290955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.628399 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.628655 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:07.128620359 +0000 UTC m=+192.939446793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.629293 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.630941 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:07.130919817 +0000 UTC m=+192.941746321 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.725852 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:06 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:06 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:06 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.725916 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.731801 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.732180 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 11:14:07.232150696 +0000 UTC m=+193.042977130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.835635 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.836022 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:07.336008752 +0000 UTC m=+193.146835186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.852655 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" podStartSLOduration=140.852632233 podStartE2EDuration="2m20.852632233s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:06.84948197 +0000 UTC m=+192.660308414" watchObservedRunningTime="2026-02-26 11:14:06.852632233 +0000 UTC m=+192.663458667" Feb 26 11:14:06 crc kubenswrapper[4699]: I0226 11:14:06.937290 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:06 crc kubenswrapper[4699]: E0226 11:14:06.937688 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:07.437671754 +0000 UTC m=+193.248498188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.038480 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:07 crc kubenswrapper[4699]: E0226 11:14:07.038891 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:07.538867861 +0000 UTC m=+193.349694345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.141722 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:07 crc kubenswrapper[4699]: E0226 11:14:07.142497 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:07.64247151 +0000 UTC m=+193.453297954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.181733 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mzgjj"] Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.182783 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.196344 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.243962 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:07 crc kubenswrapper[4699]: E0226 11:14:07.244390 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:07.744372899 +0000 UTC m=+193.555199323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.319551 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzgjj"] Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.345633 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.346054 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z6wd\" (UniqueName: \"kubernetes.io/projected/71a83978-4f86-404b-967a-0e7493ff6721-kube-api-access-9z6wd\") pod \"certified-operators-mzgjj\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") " pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.346167 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-utilities\") pod \"certified-operators-mzgjj\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") " pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.346221 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-catalog-content\") pod \"certified-operators-mzgjj\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") " pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:14:07 crc kubenswrapper[4699]: E0226 11:14:07.346406 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:07.846386421 +0000 UTC m=+193.657212855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.351432 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-czwkc"] Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.352840 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.360545 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.449424 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqrqs\" (UniqueName: \"kubernetes.io/projected/ac0026c3-1fad-4b34-9c42-389971f0c773-kube-api-access-rqrqs\") pod \"community-operators-czwkc\" (UID: \"ac0026c3-1fad-4b34-9c42-389971f0c773\") " pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.449537 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z6wd\" (UniqueName: \"kubernetes.io/projected/71a83978-4f86-404b-967a-0e7493ff6721-kube-api-access-9z6wd\") pod \"certified-operators-mzgjj\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") " pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.449586 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-utilities\") pod \"community-operators-czwkc\" (UID: \"ac0026c3-1fad-4b34-9c42-389971f0c773\") " pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.449629 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-utilities\") pod \"certified-operators-mzgjj\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") " pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.449658 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-catalog-content\") pod \"community-operators-czwkc\" (UID: \"ac0026c3-1fad-4b34-9c42-389971f0c773\") " pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.449683 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-catalog-content\") pod \"certified-operators-mzgjj\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") " pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.449715 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:07 crc kubenswrapper[4699]: E0226 11:14:07.450100 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:07.950084772 +0000 UTC m=+193.760911206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.450979 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-utilities\") pod \"certified-operators-mzgjj\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") " pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.451347 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-catalog-content\") pod \"certified-operators-mzgjj\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") " pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.516008 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z6wd\" (UniqueName: \"kubernetes.io/projected/71a83978-4f86-404b-967a-0e7493ff6721-kube-api-access-9z6wd\") pod \"certified-operators-mzgjj\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") " pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.520656 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-czwkc"] Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.524457 4699 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-vzj5b container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.524540 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" podUID="03fd3407-9529-4638-89d6-cfc6b703e510" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.539329 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-phhbz"] Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.540541 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.551151 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.551519 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-catalog-content\") pod \"community-operators-czwkc\" (UID: \"ac0026c3-1fad-4b34-9c42-389971f0c773\") " pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.551592 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqrqs\" (UniqueName: 
\"kubernetes.io/projected/ac0026c3-1fad-4b34-9c42-389971f0c773-kube-api-access-rqrqs\") pod \"community-operators-czwkc\" (UID: \"ac0026c3-1fad-4b34-9c42-389971f0c773\") " pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.551750 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-utilities\") pod \"community-operators-czwkc\" (UID: \"ac0026c3-1fad-4b34-9c42-389971f0c773\") " pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:14:07 crc kubenswrapper[4699]: E0226 11:14:07.552175 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:08.052155976 +0000 UTC m=+193.862982410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.552697 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-catalog-content\") pod \"community-operators-czwkc\" (UID: \"ac0026c3-1fad-4b34-9c42-389971f0c773\") " pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.553376 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-utilities\") pod \"community-operators-czwkc\" (UID: \"ac0026c3-1fad-4b34-9c42-389971f0c773\") " pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.574375 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.596277 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqrqs\" (UniqueName: \"kubernetes.io/projected/ac0026c3-1fad-4b34-9c42-389971f0c773-kube-api-access-rqrqs\") pod \"community-operators-czwkc\" (UID: \"ac0026c3-1fad-4b34-9c42-389971f0c773\") " pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.616522 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phhbz"] Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.660967 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-utilities\") pod \"certified-operators-phhbz\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") " pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.661021 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-699tw\" (UniqueName: \"kubernetes.io/projected/9ea10063-7888-400e-af1c-216cbde5a13e-kube-api-access-699tw\") pod \"certified-operators-phhbz\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") " pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.661062 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-catalog-content\") pod \"certified-operators-phhbz\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") " pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.661094 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:07 crc kubenswrapper[4699]: E0226 11:14:07.663765 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:08.163743501 +0000 UTC m=+193.974570035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.680463 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.731830 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:07 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:07 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:07 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.731895 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.770611 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fhgnz"] Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.770814 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.771035 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-utilities\") pod \"certified-operators-phhbz\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") " pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.771075 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-699tw\" (UniqueName: \"kubernetes.io/projected/9ea10063-7888-400e-af1c-216cbde5a13e-kube-api-access-699tw\") pod \"certified-operators-phhbz\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") " pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.771103 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-catalog-content\") pod \"certified-operators-phhbz\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") " pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.771671 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-catalog-content\") pod \"certified-operators-phhbz\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") " pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:14:07 crc kubenswrapper[4699]: E0226 11:14:07.771754 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:08.271739429 +0000 UTC m=+194.082565863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.771888 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.772019 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-utilities\") pod \"certified-operators-phhbz\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") " pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.814335 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhgnz"] Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.860866 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-699tw\" (UniqueName: \"kubernetes.io/projected/9ea10063-7888-400e-af1c-216cbde5a13e-kube-api-access-699tw\") pod \"certified-operators-phhbz\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") " pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.872015 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.872095 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-catalog-content\") pod \"community-operators-fhgnz\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.872166 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-utilities\") pod \"community-operators-fhgnz\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.872194 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hnhh\" (UniqueName: \"kubernetes.io/projected/1389c8c4-9546-4193-8067-50db90448d4f-kube-api-access-8hnhh\") pod \"community-operators-fhgnz\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:14:07 crc kubenswrapper[4699]: E0226 11:14:07.872586 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:08.372572626 +0000 UTC m=+194.183399060 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.921152 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.922005 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.925001 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.936806 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.937279 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.964172 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.973232 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:07 crc kubenswrapper[4699]: E0226 11:14:07.984606 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:08.484572653 +0000 UTC m=+194.295399087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:07 crc kubenswrapper[4699]: I0226 11:14:07.995407 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60080: no serving certificate available for the kubelet" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.007187 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.007328 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-catalog-content\") pod \"community-operators-fhgnz\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:14:08 crc kubenswrapper[4699]: E0226 11:14:08.010231 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:08.51020856 +0000 UTC m=+194.321034994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.011353 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-catalog-content\") pod \"community-operators-fhgnz\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.025532 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-utilities\") pod \"community-operators-fhgnz\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.025605 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hnhh\" (UniqueName: \"kubernetes.io/projected/1389c8c4-9546-4193-8067-50db90448d4f-kube-api-access-8hnhh\") pod \"community-operators-fhgnz\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.026461 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-utilities\") pod \"community-operators-fhgnz\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " 
pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.077098 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hnhh\" (UniqueName: \"kubernetes.io/projected/1389c8c4-9546-4193-8067-50db90448d4f-kube-api-access-8hnhh\") pod \"community-operators-fhgnz\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.088773 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60088: no serving certificate available for the kubelet" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.115042 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.130931 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.131281 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4168d2c5-00be-4270-9a2b-c2b8847e4593-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4168d2c5-00be-4270-9a2b-c2b8847e4593\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.131312 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4168d2c5-00be-4270-9a2b-c2b8847e4593-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4168d2c5-00be-4270-9a2b-c2b8847e4593\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:14:08 crc kubenswrapper[4699]: E0226 11:14:08.131507 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:08.631488331 +0000 UTC m=+194.442314775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.225356 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60090: no serving certificate available for the kubelet" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.243974 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.244106 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4168d2c5-00be-4270-9a2b-c2b8847e4593-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4168d2c5-00be-4270-9a2b-c2b8847e4593\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 
11:14:08.244147 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4168d2c5-00be-4270-9a2b-c2b8847e4593-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4168d2c5-00be-4270-9a2b-c2b8847e4593\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.244223 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4168d2c5-00be-4270-9a2b-c2b8847e4593-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4168d2c5-00be-4270-9a2b-c2b8847e4593\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:14:08 crc kubenswrapper[4699]: E0226 11:14:08.244525 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:08.744510058 +0000 UTC m=+194.555336492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.311897 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4168d2c5-00be-4270-9a2b-c2b8847e4593-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4168d2c5-00be-4270-9a2b-c2b8847e4593\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.346774 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:08 crc kubenswrapper[4699]: E0226 11:14:08.347191 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:08.847168749 +0000 UTC m=+194.657995183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.351735 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60100: no serving certificate available for the kubelet" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.395769 4699 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-vzj5b container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.395819 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" podUID="03fd3407-9529-4638-89d6-cfc6b703e510" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.396181 4699 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-vzj5b container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.396202 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" 
podUID="03fd3407-9529-4638-89d6-cfc6b703e510" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.459015 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:08 crc kubenswrapper[4699]: E0226 11:14:08.459350 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:08.95933886 +0000 UTC m=+194.770165294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.550979 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qzphl" event={"ID":"79a9064f-5fcf-42f7-af6f-71aeeb75560e","Type":"ContainerStarted","Data":"39372f8d86322e71df6515e46017a2da0585f6f0616d2e74f54472479d1581af"} Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.559946 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.561240 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:08 crc kubenswrapper[4699]: E0226 11:14:08.561669 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:09.061653391 +0000 UTC m=+194.872479815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.663019 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:08 crc kubenswrapper[4699]: E0226 11:14:08.663649 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:09.163636532 +0000 UTC m=+194.974462966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.751350 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:08 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:08 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:08 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.751407 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.763668 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.770165 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:08 crc kubenswrapper[4699]: E0226 11:14:08.770278 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:09.2702584 +0000 UTC m=+195.081084834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.770570 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:08 crc kubenswrapper[4699]: E0226 11:14:08.770894 4699 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:09.270883599 +0000 UTC m=+195.081710033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.781823 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60104: no serving certificate available for the kubelet" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.882789 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnjhv\" (UniqueName: \"kubernetes.io/projected/5f8a28b8-c47b-4288-877f-8e90a3b581b5-kube-api-access-bnjhv\") pod \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.882871 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f8a28b8-c47b-4288-877f-8e90a3b581b5-secret-volume\") pod \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.882904 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f8a28b8-c47b-4288-877f-8e90a3b581b5-config-volume\") pod \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\" (UID: \"5f8a28b8-c47b-4288-877f-8e90a3b581b5\") " Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 
11:14:08.883063 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:08 crc kubenswrapper[4699]: E0226 11:14:08.883465 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:09.383448672 +0000 UTC m=+195.194275106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.884276 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8a28b8-c47b-4288-877f-8e90a3b581b5-config-volume" (OuterVolumeSpecName: "config-volume") pod "5f8a28b8-c47b-4288-877f-8e90a3b581b5" (UID: "5f8a28b8-c47b-4288-877f-8e90a3b581b5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.886829 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzgjj"] Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.894690 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8a28b8-c47b-4288-877f-8e90a3b581b5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5f8a28b8-c47b-4288-877f-8e90a3b581b5" (UID: "5f8a28b8-c47b-4288-877f-8e90a3b581b5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.895495 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8a28b8-c47b-4288-877f-8e90a3b581b5-kube-api-access-bnjhv" (OuterVolumeSpecName: "kube-api-access-bnjhv") pod "5f8a28b8-c47b-4288-877f-8e90a3b581b5" (UID: "5f8a28b8-c47b-4288-877f-8e90a3b581b5"). InnerVolumeSpecName "kube-api-access-bnjhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.914985 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-czs8l" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.952367 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7nqx" Feb 26 11:14:08 crc kubenswrapper[4699]: E0226 11:14:08.989701 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:09.489680218 +0000 UTC m=+195.300506652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.995951 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.996144 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnjhv\" (UniqueName: \"kubernetes.io/projected/5f8a28b8-c47b-4288-877f-8e90a3b581b5-kube-api-access-bnjhv\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.996159 4699 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f8a28b8-c47b-4288-877f-8e90a3b581b5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:08 crc kubenswrapper[4699]: I0226 11:14:08.996170 4699 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f8a28b8-c47b-4288-877f-8e90a3b581b5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:09 crc kubenswrapper[4699]: W0226 11:14:09.037272 4699 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71a83978_4f86_404b_967a_0e7493ff6721.slice/crio-1c165f4cac2c47ef0e2f5ea976276ea6634d20dbf88d2b070f23064d87eecce4 WatchSource:0}: Error finding container 1c165f4cac2c47ef0e2f5ea976276ea6634d20dbf88d2b070f23064d87eecce4: Status 404 returned error can't find the container with id 1c165f4cac2c47ef0e2f5ea976276ea6634d20dbf88d2b070f23064d87eecce4 Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.055713 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.055782 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.055716 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.056042 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.062239 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-czwkc"] Feb 26 11:14:09 crc 
kubenswrapper[4699]: I0226 11:14:09.100180 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:09 crc kubenswrapper[4699]: E0226 11:14:09.100652 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:09.600633534 +0000 UTC m=+195.411459968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.122985 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60118: no serving certificate available for the kubelet" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.132884 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.165129 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xvgnb" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.191954 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.199522 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.207184 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:09 crc kubenswrapper[4699]: E0226 11:14:09.209327 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:09.709310292 +0000 UTC m=+195.520136726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.232632 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 11:14:09 crc kubenswrapper[4699]: E0226 11:14:09.240373 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8a28b8-c47b-4288-877f-8e90a3b581b5" containerName="collect-profiles" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.240433 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8a28b8-c47b-4288-877f-8e90a3b581b5" containerName="collect-profiles" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.240731 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8a28b8-c47b-4288-877f-8e90a3b581b5" containerName="collect-profiles" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.241320 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.285749 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.285839 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.287889 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.287919 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.302467 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.326296 4699 patch_prober.go:28] interesting pod/console-f9d7485db-hnsh7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.326364 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hnsh7" podUID="e6bdcf19-db76-497c-a2fe-a6de38fae724" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.328520 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.329069 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.329104 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:14:09 crc kubenswrapper[4699]: E0226 11:14:09.329239 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:09.829220773 +0000 UTC m=+195.640047207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.372512 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s8kpz"] Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.379881 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.407870 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8kpz"] Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.419904 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.420472 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.420500 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.430389 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr725\" (UniqueName: \"kubernetes.io/projected/8c96a703-e568-4916-8035-a951ae91dc2b-kube-api-access-rr725\") pod \"redhat-marketplace-s8kpz\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:14:09 crc 
kubenswrapper[4699]: I0226 11:14:09.430498 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.430557 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.430605 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-catalog-content\") pod \"redhat-marketplace-s8kpz\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.430697 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-utilities\") pod \"redhat-marketplace-s8kpz\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.430738 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.432956 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:14:09 crc kubenswrapper[4699]: E0226 11:14:09.433678 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:09.933660836 +0000 UTC m=+195.744487260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.436025 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.436544 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.470017 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60134: no serving certificate available for the kubelet" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.485014 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.489922 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.524005 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.535334 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.535517 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-utilities\") pod \"redhat-marketplace-s8kpz\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.535709 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr725\" (UniqueName: \"kubernetes.io/projected/8c96a703-e568-4916-8035-a951ae91dc2b-kube-api-access-rr725\") pod \"redhat-marketplace-s8kpz\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.535803 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-catalog-content\") pod \"redhat-marketplace-s8kpz\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:14:09 crc kubenswrapper[4699]: E0226 11:14:09.536702 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:10.036675128 +0000 UTC m=+195.847501572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.537190 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-utilities\") pod \"redhat-marketplace-s8kpz\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.540549 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-catalog-content\") pod \"redhat-marketplace-s8kpz\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.607674 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rr725\" (UniqueName: \"kubernetes.io/projected/8c96a703-e568-4916-8035-a951ae91dc2b-kube-api-access-rr725\") pod \"redhat-marketplace-s8kpz\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.607824 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czwkc" event={"ID":"ac0026c3-1fad-4b34-9c42-389971f0c773","Type":"ContainerStarted","Data":"31376761fbf12a5b81018d6bde894ab4db92607e39e297d6342dce3d31049346"} Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.632015 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" event={"ID":"5f8a28b8-c47b-4288-877f-8e90a3b581b5","Type":"ContainerDied","Data":"9cc8202a0a693b54f9a7afa4f72146520cc57d28a34110bea4d4992553af18b6"} Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.632064 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cc8202a0a693b54f9a7afa4f72146520cc57d28a34110bea4d4992553af18b6" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.632205 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.640322 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:09 crc kubenswrapper[4699]: E0226 11:14:09.640958 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:10.140943566 +0000 UTC m=+195.951770000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.654357 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzgjj" event={"ID":"71a83978-4f86-404b-967a-0e7493ff6721","Type":"ContainerStarted","Data":"1c165f4cac2c47ef0e2f5ea976276ea6634d20dbf88d2b070f23064d87eecce4"} Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.674328 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tm98c" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.674687 4699 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.719442 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.736885 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:09 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:09 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:09 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.737288 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.747518 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:09 crc kubenswrapper[4699]: E0226 11:14:09.749163 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:10.249140761 +0000 UTC m=+196.059967195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.781075 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hrk4n"] Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.782618 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.789270 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhgnz"] Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.805796 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gsl8w"] Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.806019 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" podUID="796e9631-3388-48b1-8675-3fbc4b6e435d" containerName="controller-manager" containerID="cri-o://4e6b4035a7e79b8d64117aea9e3cf6e2de88e935585f4e47f14b7523b0476a41" gracePeriod=30 Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.818751 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrk4n"] Feb 26 11:14:09 crc kubenswrapper[4699]: W0226 11:14:09.820105 4699 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1389c8c4_9546_4193_8067_50db90448d4f.slice/crio-f819282439bed9f63847862724adeef5b7b347bc240dc1336ee32c15da7bf7cc WatchSource:0}: Error finding container f819282439bed9f63847862724adeef5b7b347bc240dc1336ee32c15da7bf7cc: Status 404 returned error can't find the container with id f819282439bed9f63847862724adeef5b7b347bc240dc1336ee32c15da7bf7cc Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.840485 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-hzqgp" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.855596 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xhdp\" (UniqueName: \"kubernetes.io/projected/6e7ddf51-5522-4085-8567-76c9a254ed15-kube-api-access-7xhdp\") pod \"redhat-marketplace-hrk4n\" (UID: \"6e7ddf51-5522-4085-8567-76c9a254ed15\") " pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.855706 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.855750 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-utilities\") pod \"redhat-marketplace-hrk4n\" (UID: \"6e7ddf51-5522-4085-8567-76c9a254ed15\") " pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.855778 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-catalog-content\") pod \"redhat-marketplace-hrk4n\" (UID: \"6e7ddf51-5522-4085-8567-76c9a254ed15\") " pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:14:09 crc kubenswrapper[4699]: E0226 11:14:09.857180 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:10.35716463 +0000 UTC m=+196.167991064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.877469 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.905360 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phhbz"] Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.960036 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.960429 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xhdp\" (UniqueName: \"kubernetes.io/projected/6e7ddf51-5522-4085-8567-76c9a254ed15-kube-api-access-7xhdp\") pod \"redhat-marketplace-hrk4n\" (UID: \"6e7ddf51-5522-4085-8567-76c9a254ed15\") " pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.960569 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-utilities\") pod \"redhat-marketplace-hrk4n\" (UID: \"6e7ddf51-5522-4085-8567-76c9a254ed15\") " pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.960612 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-catalog-content\") pod \"redhat-marketplace-hrk4n\" (UID: \"6e7ddf51-5522-4085-8567-76c9a254ed15\") " pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.961309 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-catalog-content\") pod \"redhat-marketplace-hrk4n\" (UID: \"6e7ddf51-5522-4085-8567-76c9a254ed15\") " pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:14:09 crc kubenswrapper[4699]: E0226 11:14:09.964240 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:10.46421198 +0000 UTC m=+196.275038414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.965340 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-utilities\") pod \"redhat-marketplace-hrk4n\" (UID: \"6e7ddf51-5522-4085-8567-76c9a254ed15\") " pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:14:09 crc kubenswrapper[4699]: I0226 11:14:09.986611 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60138: no serving certificate available for the kubelet" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.025995 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.056452 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xhdp\" (UniqueName: 
\"kubernetes.io/projected/6e7ddf51-5522-4085-8567-76c9a254ed15-kube-api-access-7xhdp\") pod \"redhat-marketplace-hrk4n\" (UID: \"6e7ddf51-5522-4085-8567-76c9a254ed15\") " pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.057001 4699 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.063965 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8"] Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.064170 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" podUID="744aa737-e6c7-4d6b-ba7d-a9479043ad29" containerName="route-controller-manager" containerID="cri-o://e82195f6e02bee889f35431d957fa456c0b1db807d039ddcd99b290da7bc9288" gracePeriod=30 Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.065430 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:10 crc kubenswrapper[4699]: E0226 11:14:10.069294 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:10.569270612 +0000 UTC m=+196.380097046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:10 crc kubenswrapper[4699]: W0226 11:14:10.137498 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ea10063_7888_400e_af1c_216cbde5a13e.slice/crio-c8dc58ce346d0f6b6aad0363b33f0cf4112745523923e0e7d1cf3d865b90372a WatchSource:0}: Error finding container c8dc58ce346d0f6b6aad0363b33f0cf4112745523923e0e7d1cf3d865b90372a: Status 404 returned error can't find the container with id c8dc58ce346d0f6b6aad0363b33f0cf4112745523923e0e7d1cf3d865b90372a Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.169102 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:10 crc kubenswrapper[4699]: E0226 11:14:10.169912 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:10.669889813 +0000 UTC m=+196.480716247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.221777 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.272287 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:10 crc kubenswrapper[4699]: E0226 11:14:10.272658 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:10.772643817 +0000 UTC m=+196.583470251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.332939 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sc9c6"] Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.336011 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.343926 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.374033 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.374505 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-catalog-content\") pod \"redhat-operators-sc9c6\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.374662 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tqhd\" 
(UniqueName: \"kubernetes.io/projected/44d171ad-7d92-4c70-a686-65f60ded8a03-kube-api-access-2tqhd\") pod \"redhat-operators-sc9c6\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.374703 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-utilities\") pod \"redhat-operators-sc9c6\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:14:10 crc kubenswrapper[4699]: E0226 11:14:10.374770 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:10.874749641 +0000 UTC m=+196.685576075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.438307 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sc9c6"] Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.458051 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.481644 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tqhd\" (UniqueName: \"kubernetes.io/projected/44d171ad-7d92-4c70-a686-65f60ded8a03-kube-api-access-2tqhd\") pod \"redhat-operators-sc9c6\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.481757 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-utilities\") pod \"redhat-operators-sc9c6\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.481803 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-catalog-content\") pod \"redhat-operators-sc9c6\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:14:10 crc 
kubenswrapper[4699]: I0226 11:14:10.481997 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:10 crc kubenswrapper[4699]: E0226 11:14:10.482514 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:10.982501332 +0000 UTC m=+196.793327766 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.482679 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-utilities\") pod \"redhat-operators-sc9c6\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.483005 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-catalog-content\") pod \"redhat-operators-sc9c6\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " 
pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:14:10 crc kubenswrapper[4699]: W0226 11:14:10.518985 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7b6f5fd6_ce1d_48d1_bb78_237e07a93ff7.slice/crio-c2ec8f209323100d0e1a79f63a539f1e19839405896844b0751e4c3bb600c54d WatchSource:0}: Error finding container c2ec8f209323100d0e1a79f63a539f1e19839405896844b0751e4c3bb600c54d: Status 404 returned error can't find the container with id c2ec8f209323100d0e1a79f63a539f1e19839405896844b0751e4c3bb600c54d Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.537527 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tqhd\" (UniqueName: \"kubernetes.io/projected/44d171ad-7d92-4c70-a686-65f60ded8a03-kube-api-access-2tqhd\") pod \"redhat-operators-sc9c6\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.585867 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:10 crc kubenswrapper[4699]: E0226 11:14:10.586881 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:11.086861213 +0000 UTC m=+196.897687647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.682610 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60142: no serving certificate available for the kubelet" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.689705 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:10 crc kubenswrapper[4699]: E0226 11:14:10.690085 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:11.190069601 +0000 UTC m=+197.000896035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.713623 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7","Type":"ContainerStarted","Data":"c2ec8f209323100d0e1a79f63a539f1e19839405896844b0751e4c3bb600c54d"} Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.715516 4699 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-26T11:14:10.057024491Z","Handler":null,"Name":""} Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.721387 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.743811 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jhgks"] Feb 26 11:14:10 crc kubenswrapper[4699]: E0226 11:14:10.744061 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796e9631-3388-48b1-8675-3fbc4b6e435d" containerName="controller-manager" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.744133 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="796e9631-3388-48b1-8675-3fbc4b6e435d" containerName="controller-manager" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.744342 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="796e9631-3388-48b1-8675-3fbc4b6e435d" containerName="controller-manager" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.745261 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.755236 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:10 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:10 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:10 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.755331 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.766606 4699 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.790989 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-client-ca\") pod \"796e9631-3388-48b1-8675-3fbc4b6e435d\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.791324 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.791443 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-config\") pod \"796e9631-3388-48b1-8675-3fbc4b6e435d\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.791507 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-proxy-ca-bundles\") pod \"796e9631-3388-48b1-8675-3fbc4b6e435d\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.791617 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/796e9631-3388-48b1-8675-3fbc4b6e435d-serving-cert\") pod \"796e9631-3388-48b1-8675-3fbc4b6e435d\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.791652 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-vjjgq\" (UniqueName: \"kubernetes.io/projected/796e9631-3388-48b1-8675-3fbc4b6e435d-kube-api-access-vjjgq\") pod \"796e9631-3388-48b1-8675-3fbc4b6e435d\" (UID: \"796e9631-3388-48b1-8675-3fbc4b6e435d\") " Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.791966 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-catalog-content\") pod \"redhat-operators-jhgks\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.792012 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-utilities\") pod \"redhat-operators-jhgks\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.792043 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44jnw\" (UniqueName: \"kubernetes.io/projected/6b9da973-6b5f-4485-adca-8792b0a3d256-kube-api-access-44jnw\") pod \"redhat-operators-jhgks\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.793453 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-client-ca" (OuterVolumeSpecName: "client-ca") pod "796e9631-3388-48b1-8675-3fbc4b6e435d" (UID: "796e9631-3388-48b1-8675-3fbc4b6e435d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:10 crc kubenswrapper[4699]: E0226 11:14:10.793545 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:14:11.293526565 +0000 UTC m=+197.104352999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.794047 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-config" (OuterVolumeSpecName: "config") pod "796e9631-3388-48b1-8675-3fbc4b6e435d" (UID: "796e9631-3388-48b1-8675-3fbc4b6e435d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.797451 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "796e9631-3388-48b1-8675-3fbc4b6e435d" (UID: "796e9631-3388-48b1-8675-3fbc4b6e435d"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.799301 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jhgks"] Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.893742 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.894793 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-catalog-content\") pod \"redhat-operators-jhgks\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.894924 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-utilities\") pod \"redhat-operators-jhgks\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.894970 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44jnw\" (UniqueName: \"kubernetes.io/projected/6b9da973-6b5f-4485-adca-8792b0a3d256-kube-api-access-44jnw\") pod \"redhat-operators-jhgks\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.895234 4699 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.895263 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.895283 4699 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/796e9631-3388-48b1-8675-3fbc4b6e435d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:10 crc kubenswrapper[4699]: E0226 11:14:10.896089 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:14:11.396075553 +0000 UTC m=+197.206901987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8656" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.896792 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-catalog-content\") pod \"redhat-operators-jhgks\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.897156 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-utilities\") pod \"redhat-operators-jhgks\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.910091 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796e9631-3388-48b1-8675-3fbc4b6e435d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "796e9631-3388-48b1-8675-3fbc4b6e435d" (UID: "796e9631-3388-48b1-8675-3fbc4b6e435d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.919865 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/796e9631-3388-48b1-8675-3fbc4b6e435d-kube-api-access-vjjgq" (OuterVolumeSpecName: "kube-api-access-vjjgq") pod "796e9631-3388-48b1-8675-3fbc4b6e435d" (UID: "796e9631-3388-48b1-8675-3fbc4b6e435d"). InnerVolumeSpecName "kube-api-access-vjjgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.921532 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4168d2c5-00be-4270-9a2b-c2b8847e4593","Type":"ContainerStarted","Data":"17985416ddd6869d379ec85a7e60b0e0d55af715771863737c8600be5d531fb7"} Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.931540 4699 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.932546 4699 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.944369 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44jnw\" (UniqueName: \"kubernetes.io/projected/6b9da973-6b5f-4485-adca-8792b0a3d256-kube-api-access-44jnw\") pod \"redhat-operators-jhgks\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.958752 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.983774 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8kpz"] Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.997443 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.997794 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/796e9631-3388-48b1-8675-3fbc4b6e435d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:10 crc kubenswrapper[4699]: I0226 11:14:10.997808 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjjgq\" (UniqueName: \"kubernetes.io/projected/796e9631-3388-48b1-8675-3fbc4b6e435d-kube-api-access-vjjgq\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.013302 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.038802 4699 generic.go:334] "Generic (PLEG): container finished" podID="744aa737-e6c7-4d6b-ba7d-a9479043ad29" containerID="e82195f6e02bee889f35431d957fa456c0b1db807d039ddcd99b290da7bc9288" exitCode=0 Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.038920 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" event={"ID":"744aa737-e6c7-4d6b-ba7d-a9479043ad29","Type":"ContainerDied","Data":"e82195f6e02bee889f35431d957fa456c0b1db807d039ddcd99b290da7bc9288"} Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.049367 4699 generic.go:334] "Generic (PLEG): container finished" podID="1389c8c4-9546-4193-8067-50db90448d4f" containerID="52ffe1a540a589fb575f8cfc11cab09c8b7aa57c3ace31541c3b66e087bf8460" exitCode=0 Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.049531 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhgnz" event={"ID":"1389c8c4-9546-4193-8067-50db90448d4f","Type":"ContainerDied","Data":"52ffe1a540a589fb575f8cfc11cab09c8b7aa57c3ace31541c3b66e087bf8460"} Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.049566 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhgnz" event={"ID":"1389c8c4-9546-4193-8067-50db90448d4f","Type":"ContainerStarted","Data":"f819282439bed9f63847862724adeef5b7b347bc240dc1336ee32c15da7bf7cc"} Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.056700 4699 generic.go:334] "Generic (PLEG): container finished" podID="71a83978-4f86-404b-967a-0e7493ff6721" containerID="f1b31944470f82af52e860af7004767cf2db0ef2acdf2a9986adc95701213e55" exitCode=0 Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.056934 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzgjj" 
event={"ID":"71a83978-4f86-404b-967a-0e7493ff6721","Type":"ContainerDied","Data":"f1b31944470f82af52e860af7004767cf2db0ef2acdf2a9986adc95701213e55"} Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.087501 4699 generic.go:334] "Generic (PLEG): container finished" podID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerID="39ff3a6e4269604cce0aea66db001b967d934c0076038e7958d8b015de9375a1" exitCode=0 Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.087615 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czwkc" event={"ID":"ac0026c3-1fad-4b34-9c42-389971f0c773","Type":"ContainerDied","Data":"39ff3a6e4269604cce0aea66db001b967d934c0076038e7958d8b015de9375a1"} Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.098676 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qzphl" event={"ID":"79a9064f-5fcf-42f7-af6f-71aeeb75560e","Type":"ContainerStarted","Data":"d1999d25b37d3ebe60c93058b5602a84f3b965ce72276f7b288ccf6a4dd3ff40"} Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.099600 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.105249 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.111295 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhbz" event={"ID":"9ea10063-7888-400e-af1c-216cbde5a13e","Type":"ContainerStarted","Data":"c8dc58ce346d0f6b6aad0363b33f0cf4112745523923e0e7d1cf3d865b90372a"} Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.124014 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrk4n"] Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.135051 4699 generic.go:334] "Generic (PLEG): container finished" podID="796e9631-3388-48b1-8675-3fbc4b6e435d" containerID="4e6b4035a7e79b8d64117aea9e3cf6e2de88e935585f4e47f14b7523b0476a41" exitCode=0 Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.135196 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" event={"ID":"796e9631-3388-48b1-8675-3fbc4b6e435d","Type":"ContainerDied","Data":"4e6b4035a7e79b8d64117aea9e3cf6e2de88e935585f4e47f14b7523b0476a41"} Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.135236 4699 scope.go:117] "RemoveContainer" containerID="4e6b4035a7e79b8d64117aea9e3cf6e2de88e935585f4e47f14b7523b0476a41" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.135274 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gsl8w" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.147973 4699 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.148019 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.202903 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66m62\" (UniqueName: \"kubernetes.io/projected/744aa737-e6c7-4d6b-ba7d-a9479043ad29-kube-api-access-66m62\") pod \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.202959 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-client-ca\") pod \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.202982 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/744aa737-e6c7-4d6b-ba7d-a9479043ad29-serving-cert\") pod \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\" (UID: \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.203031 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-config\") pod \"744aa737-e6c7-4d6b-ba7d-a9479043ad29\" (UID: 
\"744aa737-e6c7-4d6b-ba7d-a9479043ad29\") " Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.205759 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-client-ca" (OuterVolumeSpecName: "client-ca") pod "744aa737-e6c7-4d6b-ba7d-a9479043ad29" (UID: "744aa737-e6c7-4d6b-ba7d-a9479043ad29"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.206371 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-config" (OuterVolumeSpecName: "config") pod "744aa737-e6c7-4d6b-ba7d-a9479043ad29" (UID: "744aa737-e6c7-4d6b-ba7d-a9479043ad29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.223217 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744aa737-e6c7-4d6b-ba7d-a9479043ad29-kube-api-access-66m62" (OuterVolumeSpecName: "kube-api-access-66m62") pod "744aa737-e6c7-4d6b-ba7d-a9479043ad29" (UID: "744aa737-e6c7-4d6b-ba7d-a9479043ad29"). InnerVolumeSpecName "kube-api-access-66m62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.230243 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/744aa737-e6c7-4d6b-ba7d-a9479043ad29-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "744aa737-e6c7-4d6b-ba7d-a9479043ad29" (UID: "744aa737-e6c7-4d6b-ba7d-a9479043ad29"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.238848 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8656\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.264849 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gsl8w"] Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.265237 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gsl8w"] Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.311829 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66m62\" (UniqueName: \"kubernetes.io/projected/744aa737-e6c7-4d6b-ba7d-a9479043ad29-kube-api-access-66m62\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.311867 4699 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.311884 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/744aa737-e6c7-4d6b-ba7d-a9479043ad29-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.311896 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744aa737-e6c7-4d6b-ba7d-a9479043ad29-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.357698 4699 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.360331 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.411211 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.585056 4699 patch_prober.go:28] interesting pod/apiserver-76f77b778f-f8s5j container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 26 11:14:11 crc kubenswrapper[4699]: [+]log ok Feb 26 11:14:11 crc kubenswrapper[4699]: [+]etcd ok Feb 26 11:14:11 crc kubenswrapper[4699]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 26 11:14:11 crc kubenswrapper[4699]: [+]poststarthook/generic-apiserver-start-informers ok Feb 26 11:14:11 crc kubenswrapper[4699]: [+]poststarthook/max-in-flight-filter ok Feb 26 11:14:11 crc kubenswrapper[4699]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 26 11:14:11 crc kubenswrapper[4699]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 26 11:14:11 crc kubenswrapper[4699]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 26 11:14:11 crc kubenswrapper[4699]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 26 11:14:11 crc kubenswrapper[4699]: [+]poststarthook/project.openshift.io-projectcache ok Feb 26 11:14:11 crc kubenswrapper[4699]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 26 11:14:11 crc kubenswrapper[4699]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Feb 26 11:14:11 crc 
kubenswrapper[4699]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 26 11:14:11 crc kubenswrapper[4699]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 26 11:14:11 crc kubenswrapper[4699]: livez check failed Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.585490 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" podUID="a550b2ea-3ce7-4df3-bbf5-f1025afca8c1" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.657725 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sc9c6"] Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.720688 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jhgks"] Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.730632 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:11 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:11 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:11 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.730679 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.867839 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8656"] Feb 26 11:14:11 crc kubenswrapper[4699]: W0226 11:14:11.901464 4699 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7232eb23_31ae_4e72_ae27_c256dc4cac9a.slice/crio-39ab42bfea1ba6c0800a2508ff52b7eb12199142899ef006de5ffbee4f2135a3 WatchSource:0}: Error finding container 39ab42bfea1ba6c0800a2508ff52b7eb12199142899ef006de5ffbee4f2135a3: Status 404 returned error can't find the container with id 39ab42bfea1ba6c0800a2508ff52b7eb12199142899ef006de5ffbee4f2135a3 Feb 26 11:14:11 crc kubenswrapper[4699]: I0226 11:14:11.970501 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.009356 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60148: no serving certificate available for the kubelet" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.163703 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7","Type":"ContainerStarted","Data":"3f1d1b042714ef5e17278b4b9d7bda264b1ad3f30ee29d94ea5f89d3666dabf8"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.168845 4699 generic.go:334] "Generic (PLEG): container finished" podID="4168d2c5-00be-4270-9a2b-c2b8847e4593" containerID="20bd2ec5dc40708614f63aff6ec3d623d4d0578a8054c95d7cd179f080c036fe" exitCode=0 Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.168920 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4168d2c5-00be-4270-9a2b-c2b8847e4593","Type":"ContainerDied","Data":"20bd2ec5dc40708614f63aff6ec3d623d4d0578a8054c95d7cd179f080c036fe"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.174802 4699 generic.go:334] "Generic (PLEG): container finished" podID="9ea10063-7888-400e-af1c-216cbde5a13e" containerID="e2ca3e75def51c6eedb622aaa6507c8da48849ebf241567dc8e903d48fc3a6e5" exitCode=0 Feb 26 11:14:12 crc 
kubenswrapper[4699]: I0226 11:14:12.174892 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhbz" event={"ID":"9ea10063-7888-400e-af1c-216cbde5a13e","Type":"ContainerDied","Data":"e2ca3e75def51c6eedb622aaa6507c8da48849ebf241567dc8e903d48fc3a6e5"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.178552 4699 generic.go:334] "Generic (PLEG): container finished" podID="44d171ad-7d92-4c70-a686-65f60ded8a03" containerID="0d415d903af1673dff3ecf368cade4c0a0a93c2b3158c0519393d68509c7e6d3" exitCode=0 Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.178635 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc9c6" event={"ID":"44d171ad-7d92-4c70-a686-65f60ded8a03","Type":"ContainerDied","Data":"0d415d903af1673dff3ecf368cade4c0a0a93c2b3158c0519393d68509c7e6d3"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.178664 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc9c6" event={"ID":"44d171ad-7d92-4c70-a686-65f60ded8a03","Type":"ContainerStarted","Data":"8416abc544344d1375d554f38d43ac67e9642de8063e20464268f9eaf0d51147"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.184216 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.184199884 podStartE2EDuration="3.184199884s" podCreationTimestamp="2026-02-26 11:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:12.182805573 +0000 UTC m=+197.993632027" watchObservedRunningTime="2026-02-26 11:14:12.184199884 +0000 UTC m=+197.995026318" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.185387 4699 generic.go:334] "Generic (PLEG): container finished" podID="6e7ddf51-5522-4085-8567-76c9a254ed15" 
containerID="3255d554cf00b3f149c14b7b5562baa6c773b2f01ac34c99e514e81d89810bb1" exitCode=0 Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.185538 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrk4n" event={"ID":"6e7ddf51-5522-4085-8567-76c9a254ed15","Type":"ContainerDied","Data":"3255d554cf00b3f149c14b7b5562baa6c773b2f01ac34c99e514e81d89810bb1"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.185569 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrk4n" event={"ID":"6e7ddf51-5522-4085-8567-76c9a254ed15","Type":"ContainerStarted","Data":"64ab7f5c1142b79d1cad6017fda721d048cccdd042121faa577213948620ffa2"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.198821 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qzphl" event={"ID":"79a9064f-5fcf-42f7-af6f-71aeeb75560e","Type":"ContainerStarted","Data":"d95936cc463e6a5cd671e1ed3f7b1e7ee6c426a777f09428f3b6caa620a5e5de"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.202916 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf"] Feb 26 11:14:12 crc kubenswrapper[4699]: E0226 11:14:12.203312 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744aa737-e6c7-4d6b-ba7d-a9479043ad29" containerName="route-controller-manager" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.203340 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="744aa737-e6c7-4d6b-ba7d-a9479043ad29" containerName="route-controller-manager" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.203690 4699 generic.go:334] "Generic (PLEG): container finished" podID="8c96a703-e568-4916-8035-a951ae91dc2b" containerID="0c88d150d726034804b09cdfd6ed7b9a516e4ecd807d5799c0ea12f3955c7b69" exitCode=0 Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.203840 4699 
memory_manager.go:354] "RemoveStaleState removing state" podUID="744aa737-e6c7-4d6b-ba7d-a9479043ad29" containerName="route-controller-manager" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.204466 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8kpz" event={"ID":"8c96a703-e568-4916-8035-a951ae91dc2b","Type":"ContainerDied","Data":"0c88d150d726034804b09cdfd6ed7b9a516e4ecd807d5799c0ea12f3955c7b69"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.204509 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8kpz" event={"ID":"8c96a703-e568-4916-8035-a951ae91dc2b","Type":"ContainerStarted","Data":"18a720cd12fbf1604976388b722cf7ea85f1660cb3d90ac7f016d51d465b43d1"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.204612 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.207179 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" event={"ID":"744aa737-e6c7-4d6b-ba7d-a9479043ad29","Type":"ContainerDied","Data":"3f258b9ae41f11af5114ab5232e03c4aa9dff40c08fe1e6fde31d40c3ec891ec"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.207225 4699 scope.go:117] "RemoveContainer" containerID="e82195f6e02bee889f35431d957fa456c0b1db807d039ddcd99b290da7bc9288" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.207326 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.208337 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c78ff548b-nppmt"] Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.209290 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.219559 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.219721 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.220213 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.220326 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.220384 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.228788 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.237621 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" event={"ID":"7232eb23-31ae-4e72-ae27-c256dc4cac9a","Type":"ContainerStarted","Data":"39ab42bfea1ba6c0800a2508ff52b7eb12199142899ef006de5ffbee4f2135a3"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 
11:14:12.242913 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.248488 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf"] Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.286948 4699 generic.go:334] "Generic (PLEG): container finished" podID="6b9da973-6b5f-4485-adca-8792b0a3d256" containerID="e514effd43a8aac49eb2edbdb6959f6095c102c0f8bc4412986233930c5d5ff6" exitCode=0 Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.292691 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="796e9631-3388-48b1-8675-3fbc4b6e435d" path="/var/lib/kubelet/pods/796e9631-3388-48b1-8675-3fbc4b6e435d/volumes" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.293901 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.294629 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhgks" event={"ID":"6b9da973-6b5f-4485-adca-8792b0a3d256","Type":"ContainerDied","Data":"e514effd43a8aac49eb2edbdb6959f6095c102c0f8bc4412986233930c5d5ff6"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.294861 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c78ff548b-nppmt"] Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.294970 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhgks" event={"ID":"6b9da973-6b5f-4485-adca-8792b0a3d256","Type":"ContainerStarted","Data":"1df59f3f6cf47eeaee6c7803f5d095457eb18adeaca6dc9c81e5b0dfb758e003"} Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.339969 4699 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qzphl" podStartSLOduration=17.339950752 podStartE2EDuration="17.339950752s" podCreationTimestamp="2026-02-26 11:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:12.33818598 +0000 UTC m=+198.149012444" watchObservedRunningTime="2026-02-26 11:14:12.339950752 +0000 UTC m=+198.150777186" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.352266 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-client-ca\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.352574 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2wz\" (UniqueName: \"kubernetes.io/projected/2f265819-8c24-4d84-9afe-423152764dfb-kube-api-access-md2wz\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.352670 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-proxy-ca-bundles\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.352761 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f265819-8c24-4d84-9afe-423152764dfb-serving-cert\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.352842 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-config\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.352949 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v87n7\" (UniqueName: \"kubernetes.io/projected/1873d943-4785-4bc5-a9c4-5a027a932464-kube-api-access-v87n7\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.353034 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-client-ca\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.353109 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1873d943-4785-4bc5-a9c4-5a027a932464-serving-cert\") pod \"controller-manager-7c78ff548b-nppmt\" 
(UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.353290 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-config\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.415363 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8"] Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.422825 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fq7g8"] Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.455592 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-proxy-ca-bundles\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.455946 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f265819-8c24-4d84-9afe-423152764dfb-serving-cert\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.455999 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-config\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.456105 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v87n7\" (UniqueName: \"kubernetes.io/projected/1873d943-4785-4bc5-a9c4-5a027a932464-kube-api-access-v87n7\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.456155 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1873d943-4785-4bc5-a9c4-5a027a932464-serving-cert\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.456212 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-client-ca\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.456289 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-config\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.456399 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-client-ca\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.456439 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md2wz\" (UniqueName: \"kubernetes.io/projected/2f265819-8c24-4d84-9afe-423152764dfb-kube-api-access-md2wz\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.460645 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-config\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.461501 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-client-ca\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.462238 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-client-ca\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " 
pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.465344 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-proxy-ca-bundles\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.468572 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1873d943-4785-4bc5-a9c4-5a027a932464-serving-cert\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.471621 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f265819-8c24-4d84-9afe-423152764dfb-serving-cert\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.471782 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-config\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.480567 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md2wz\" (UniqueName: 
\"kubernetes.io/projected/2f265819-8c24-4d84-9afe-423152764dfb-kube-api-access-md2wz\") pod \"route-controller-manager-59b4784554-77qxf\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.485198 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v87n7\" (UniqueName: \"kubernetes.io/projected/1873d943-4785-4bc5-a9c4-5a027a932464-kube-api-access-v87n7\") pod \"controller-manager-7c78ff548b-nppmt\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.581149 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.624462 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.734512 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:12 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:12 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:12 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:12 crc kubenswrapper[4699]: I0226 11:14:12.734590 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.147767 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf"] Feb 26 11:14:13 crc kubenswrapper[4699]: W0226 11:14:13.175047 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f265819_8c24_4d84_9afe_423152764dfb.slice/crio-ad9a383bdf2560f2e8e699488a34354989214933a94da177587d7b8f57d53fa3 WatchSource:0}: Error finding container ad9a383bdf2560f2e8e699488a34354989214933a94da177587d7b8f57d53fa3: Status 404 returned error can't find the container with id ad9a383bdf2560f2e8e699488a34354989214933a94da177587d7b8f57d53fa3 Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.331258 4699 generic.go:334] "Generic (PLEG): container finished" podID="7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7" containerID="3f1d1b042714ef5e17278b4b9d7bda264b1ad3f30ee29d94ea5f89d3666dabf8" exitCode=0 Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.331634 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7","Type":"ContainerDied","Data":"3f1d1b042714ef5e17278b4b9d7bda264b1ad3f30ee29d94ea5f89d3666dabf8"} Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.337593 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" event={"ID":"7232eb23-31ae-4e72-ae27-c256dc4cac9a","Type":"ContainerStarted","Data":"4779da011a858c6a8df3a7fdfbfb2e01a004c252953c916a440f89808caa4efd"} Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.338413 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.343765 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" event={"ID":"2f265819-8c24-4d84-9afe-423152764dfb","Type":"ContainerStarted","Data":"ad9a383bdf2560f2e8e699488a34354989214933a94da177587d7b8f57d53fa3"} Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.383099 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" podStartSLOduration=147.380797143 podStartE2EDuration="2m27.380797143s" podCreationTimestamp="2026-02-26 11:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:13.379067892 +0000 UTC m=+199.189894416" watchObservedRunningTime="2026-02-26 11:14:13.380797143 +0000 UTC m=+199.191623577" Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.459220 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c78ff548b-nppmt"] Feb 26 11:14:13 crc kubenswrapper[4699]: W0226 11:14:13.487020 4699 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1873d943_4785_4bc5_a9c4_5a027a932464.slice/crio-8c28a63f20c67fc69e31c70403424ed7ca9920d8e519c0772be0d307ed557c9d WatchSource:0}: Error finding container 8c28a63f20c67fc69e31c70403424ed7ca9920d8e519c0772be0d307ed557c9d: Status 404 returned error can't find the container with id 8c28a63f20c67fc69e31c70403424ed7ca9920d8e519c0772be0d307ed557c9d Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.722773 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:13 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:13 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:13 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.722839 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.886029 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.992127 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4168d2c5-00be-4270-9a2b-c2b8847e4593-kubelet-dir\") pod \"4168d2c5-00be-4270-9a2b-c2b8847e4593\" (UID: \"4168d2c5-00be-4270-9a2b-c2b8847e4593\") " Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.992323 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4168d2c5-00be-4270-9a2b-c2b8847e4593-kube-api-access\") pod \"4168d2c5-00be-4270-9a2b-c2b8847e4593\" (UID: \"4168d2c5-00be-4270-9a2b-c2b8847e4593\") " Feb 26 11:14:13 crc kubenswrapper[4699]: I0226 11:14:13.998079 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4168d2c5-00be-4270-9a2b-c2b8847e4593-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4168d2c5-00be-4270-9a2b-c2b8847e4593" (UID: "4168d2c5-00be-4270-9a2b-c2b8847e4593"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:13.999956 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-tnwpn" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.000817 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4168d2c5-00be-4270-9a2b-c2b8847e4593-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4168d2c5-00be-4270-9a2b-c2b8847e4593" (UID: "4168d2c5-00be-4270-9a2b-c2b8847e4593"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.096008 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4168d2c5-00be-4270-9a2b-c2b8847e4593-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.096041 4699 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4168d2c5-00be-4270-9a2b-c2b8847e4593-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.294413 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744aa737-e6c7-4d6b-ba7d-a9479043ad29" path="/var/lib/kubelet/pods/744aa737-e6c7-4d6b-ba7d-a9479043ad29/volumes" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.394479 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4168d2c5-00be-4270-9a2b-c2b8847e4593","Type":"ContainerDied","Data":"17985416ddd6869d379ec85a7e60b0e0d55af715771863737c8600be5d531fb7"} Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.394524 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17985416ddd6869d379ec85a7e60b0e0d55af715771863737c8600be5d531fb7" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.394591 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.426495 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" event={"ID":"2f265819-8c24-4d84-9afe-423152764dfb","Type":"ContainerStarted","Data":"438953ae4c2dd5482a447caa172401f76f9355d9f070a2edd3604d4596edc619"} Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.427337 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.434610 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.437402 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.441503 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" event={"ID":"1873d943-4785-4bc5-a9c4-5a027a932464","Type":"ContainerStarted","Data":"a08711cd4ea22926a99c6ab95e9c9d94bed1d66b8cc11c69332013859eda951b"} Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.441554 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.441569 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" event={"ID":"1873d943-4785-4bc5-a9c4-5a027a932464","Type":"ContainerStarted","Data":"8c28a63f20c67fc69e31c70403424ed7ca9920d8e519c0772be0d307ed557c9d"} Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 
11:14:14.445559 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-f8s5j" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.461264 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" podStartSLOduration=4.461245832 podStartE2EDuration="4.461245832s" podCreationTimestamp="2026-02-26 11:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:14.460050217 +0000 UTC m=+200.270876671" watchObservedRunningTime="2026-02-26 11:14:14.461245832 +0000 UTC m=+200.272072276" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.495410 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.501641 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" podStartSLOduration=4.501617254 podStartE2EDuration="4.501617254s" podCreationTimestamp="2026-02-26 11:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:14.497443501 +0000 UTC m=+200.308269945" watchObservedRunningTime="2026-02-26 11:14:14.501617254 +0000 UTC m=+200.312443688" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.704981 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60164: no serving certificate available for the kubelet" Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.726386 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:14 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:14 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:14 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:14 crc kubenswrapper[4699]: I0226 11:14:14.726700 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.395441 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.531960 4699 ???:1] "http: TLS handshake error from 192.168.126.11:60166: no serving certificate available for the kubelet" Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.550702 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.550839 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7","Type":"ContainerDied","Data":"c2ec8f209323100d0e1a79f63a539f1e19839405896844b0751e4c3bb600c54d"} Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.550870 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ec8f209323100d0e1a79f63a539f1e19839405896844b0751e4c3bb600c54d" Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.552773 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kubelet-dir\") pod \"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7\" (UID: \"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7\") " Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.552843 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kube-api-access\") pod \"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7\" (UID: \"7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7\") " Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.554025 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7" (UID: "7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.576835 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7" (UID: "7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.654942 4699 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.655369 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.734462 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:15 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:15 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:15 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:15 crc kubenswrapper[4699]: I0226 11:14:15.734544 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:16 crc kubenswrapper[4699]: I0226 11:14:16.721952 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:16 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:16 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:16 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:16 crc kubenswrapper[4699]: I0226 11:14:16.722001 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:17 crc kubenswrapper[4699]: I0226 11:14:17.726637 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:17 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:17 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:17 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:17 crc kubenswrapper[4699]: I0226 11:14:17.727024 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:18 crc kubenswrapper[4699]: I0226 11:14:18.722257 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:18 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:18 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:18 crc 
kubenswrapper[4699]: healthz check failed Feb 26 11:14:18 crc kubenswrapper[4699]: I0226 11:14:18.722333 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:19 crc kubenswrapper[4699]: I0226 11:14:19.055863 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:19 crc kubenswrapper[4699]: I0226 11:14:19.055916 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:19 crc kubenswrapper[4699]: I0226 11:14:19.055980 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:19 crc kubenswrapper[4699]: I0226 11:14:19.056036 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:19 crc kubenswrapper[4699]: I0226 11:14:19.260331 4699 patch_prober.go:28] interesting pod/console-f9d7485db-hnsh7 container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 26 11:14:19 crc kubenswrapper[4699]: I0226 11:14:19.260413 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hnsh7" podUID="e6bdcf19-db76-497c-a2fe-a6de38fae724" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 26 11:14:19 crc kubenswrapper[4699]: I0226 11:14:19.721171 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:19 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:19 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:19 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:19 crc kubenswrapper[4699]: I0226 11:14:19.721399 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:19 crc kubenswrapper[4699]: I0226 11:14:19.861552 4699 ???:1] "http: TLS handshake error from 192.168.126.11:47002: no serving certificate available for the kubelet" Feb 26 11:14:20 crc kubenswrapper[4699]: I0226 11:14:20.720366 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:20 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:20 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:20 crc kubenswrapper[4699]: healthz check failed 
Feb 26 11:14:20 crc kubenswrapper[4699]: I0226 11:14:20.720722 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:21 crc kubenswrapper[4699]: I0226 11:14:21.722152 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:21 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:21 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:21 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:21 crc kubenswrapper[4699]: I0226 11:14:21.722213 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:22 crc kubenswrapper[4699]: I0226 11:14:22.727151 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:22 crc kubenswrapper[4699]: [-]has-synced failed: reason withheld Feb 26 11:14:22 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:22 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:22 crc kubenswrapper[4699]: I0226 11:14:22.727236 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:23 crc 
kubenswrapper[4699]: I0226 11:14:23.725576 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:14:23 crc kubenswrapper[4699]: [+]has-synced ok Feb 26 11:14:23 crc kubenswrapper[4699]: [+]process-running ok Feb 26 11:14:23 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:23 crc kubenswrapper[4699]: I0226 11:14:23.725713 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:24 crc kubenswrapper[4699]: I0226 11:14:24.721633 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:14:24 crc kubenswrapper[4699]: I0226 11:14:24.726051 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xm88w" Feb 26 11:14:29 crc kubenswrapper[4699]: I0226 11:14:28.870175 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:14:29 crc kubenswrapper[4699]: I0226 11:14:28.870276 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:14:29 crc kubenswrapper[4699]: I0226 11:14:28.870341 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:14:29 crc kubenswrapper[4699]: I0226 11:14:28.870388 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:14:29 crc kubenswrapper[4699]: I0226 11:14:29.055581 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:29 crc kubenswrapper[4699]: I0226 11:14:29.055635 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:29 crc kubenswrapper[4699]: I0226 11:14:29.056010 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 
26 11:14:29 crc kubenswrapper[4699]: I0226 11:14:29.056027 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:29 crc kubenswrapper[4699]: I0226 11:14:29.254355 4699 patch_prober.go:28] interesting pod/console-f9d7485db-hnsh7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 26 11:14:29 crc kubenswrapper[4699]: I0226 11:14:29.254434 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hnsh7" podUID="e6bdcf19-db76-497c-a2fe-a6de38fae724" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 26 11:14:32 crc kubenswrapper[4699]: E0226 11:14:32.340628 4699 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:14:32 crc kubenswrapper[4699]: E0226 11:14:32.341002 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:16:34.340975723 +0000 UTC m=+340.151802167 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:14:32 crc kubenswrapper[4699]: E0226 11:14:32.341059 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:14:32 crc kubenswrapper[4699]: E0226 11:14:32.341126 4699 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:14:32 crc kubenswrapper[4699]: E0226 11:14:32.341181 4699 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 11:14:32 crc kubenswrapper[4699]: E0226 11:14:32.341247 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:16:34.34122796 +0000 UTC m=+340.152054394 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Feb 26 11:14:32 crc kubenswrapper[4699]: I0226 11:14:32.383438 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]controller ok Feb 26 11:14:32 crc kubenswrapper[4699]: [-]backend-http failed: reason withheld Feb 26 11:14:32 crc kubenswrapper[4699]: healthz check failed Feb 26 11:14:32 crc kubenswrapper[4699]: I0226 11:14:32.383523 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:14:32 crc kubenswrapper[4699]: I0226 11:14:32.383971 4699 patch_prober.go:28] interesting pod/router-default-5444994796-xm88w container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:14:32 crc kubenswrapper[4699]: I0226 11:14:32.384004 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-xm88w" podUID="4a97e310-1811-48a9-a31a-eb9a0321d280" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:14:32 crc kubenswrapper[4699]: I0226 11:14:32.391358 4699 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 11:14:32 crc kubenswrapper[4699]: I0226 11:14:32.391894 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 11:14:32 crc kubenswrapper[4699]: I0226 11:14:32.392014 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 11:14:32 crc kubenswrapper[4699]: E0226 11:14:32.394370 4699 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:14:32 crc kubenswrapper[4699]: E0226 11:14:32.394460 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:16:34.394425371 +0000 UTC m=+340.205251795 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:14:32 crc kubenswrapper[4699]: E0226 11:14:32.394566 4699 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:14:32 crc kubenswrapper[4699]: E0226 11:14:32.394632 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:16:34.394610917 +0000 UTC m=+340.205437351 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:14:32 crc kubenswrapper[4699]: I0226 11:14:32.397693 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 11:14:32 crc kubenswrapper[4699]: E0226 11:14:32.424105 4699 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.529s" Feb 26 11:14:33 crc kubenswrapper[4699]: I0226 11:14:33.554369 4699 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-vzj5b container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": context 
deadline exceeded" start-of-body= Feb 26 11:14:33 crc kubenswrapper[4699]: I0226 11:14:33.554460 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vzj5b" podUID="03fd3407-9529-4638-89d6-cfc6b703e510" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": context deadline exceeded" Feb 26 11:14:33 crc kubenswrapper[4699]: E0226 11:14:33.725267 4699 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.301s" Feb 26 11:14:33 crc kubenswrapper[4699]: I0226 11:14:33.725413 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-tcnxt" Feb 26 11:14:33 crc kubenswrapper[4699]: I0226 11:14:33.743294 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:14:33 crc kubenswrapper[4699]: I0226 11:14:33.746483 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"a48cf703fa085cd2031065303843172d7d70091d4cf97de0a11e40328102d59a"} pod="openshift-console/downloads-7954f5f757-tcnxt" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 26 11:14:33 crc kubenswrapper[4699]: I0226 11:14:33.752314 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" containerID="cri-o://a48cf703fa085cd2031065303843172d7d70091d4cf97de0a11e40328102d59a" gracePeriod=2 Feb 26 11:14:33 crc kubenswrapper[4699]: I0226 11:14:33.769888 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:33 crc kubenswrapper[4699]: I0226 11:14:33.769992 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:34 crc kubenswrapper[4699]: I0226 11:14:34.505845 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c78ff548b-nppmt"] Feb 26 11:14:34 crc kubenswrapper[4699]: I0226 11:14:34.506152 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" podUID="1873d943-4785-4bc5-a9c4-5a027a932464" containerName="controller-manager" containerID="cri-o://a08711cd4ea22926a99c6ab95e9c9d94bed1d66b8cc11c69332013859eda951b" gracePeriod=30 Feb 26 11:14:34 crc kubenswrapper[4699]: I0226 11:14:34.534392 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf"] Feb 26 11:14:34 crc kubenswrapper[4699]: I0226 11:14:34.534678 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" podUID="2f265819-8c24-4d84-9afe-423152764dfb" containerName="route-controller-manager" containerID="cri-o://438953ae4c2dd5482a447caa172401f76f9355d9f070a2edd3604d4596edc619" gracePeriod=30 Feb 26 11:14:34 crc kubenswrapper[4699]: I0226 11:14:34.754169 4699 generic.go:334] "Generic (PLEG): container finished" podID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerID="a48cf703fa085cd2031065303843172d7d70091d4cf97de0a11e40328102d59a" exitCode=0 Feb 26 11:14:34 crc kubenswrapper[4699]: I0226 11:14:34.754292 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tcnxt" event={"ID":"72b1bc55-f48b-4d90-ab02-3a80438096b6","Type":"ContainerDied","Data":"a48cf703fa085cd2031065303843172d7d70091d4cf97de0a11e40328102d59a"} Feb 26 11:14:34 crc kubenswrapper[4699]: I0226 11:14:34.759161 4699 generic.go:334] "Generic (PLEG): container finished" podID="1873d943-4785-4bc5-a9c4-5a027a932464" containerID="a08711cd4ea22926a99c6ab95e9c9d94bed1d66b8cc11c69332013859eda951b" exitCode=0 Feb 26 11:14:34 crc kubenswrapper[4699]: I0226 11:14:34.759216 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" event={"ID":"1873d943-4785-4bc5-a9c4-5a027a932464","Type":"ContainerDied","Data":"a08711cd4ea22926a99c6ab95e9c9d94bed1d66b8cc11c69332013859eda951b"} Feb 26 11:14:35 crc kubenswrapper[4699]: I0226 11:14:35.898244 4699 generic.go:334] "Generic (PLEG): container finished" podID="2f265819-8c24-4d84-9afe-423152764dfb" containerID="438953ae4c2dd5482a447caa172401f76f9355d9f070a2edd3604d4596edc619" exitCode=0 Feb 26 11:14:35 crc kubenswrapper[4699]: I0226 11:14:35.898331 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" event={"ID":"2f265819-8c24-4d84-9afe-423152764dfb","Type":"ContainerDied","Data":"438953ae4c2dd5482a447caa172401f76f9355d9f070a2edd3604d4596edc619"} Feb 26 11:14:39 crc kubenswrapper[4699]: I0226 11:14:39.066012 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:39 crc kubenswrapper[4699]: I0226 11:14:39.066457 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:39 crc kubenswrapper[4699]: I0226 11:14:39.071889 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dk749" Feb 26 11:14:39 crc kubenswrapper[4699]: I0226 11:14:39.266610 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:14:39 crc kubenswrapper[4699]: I0226 11:14:39.275674 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:14:40 crc kubenswrapper[4699]: I0226 11:14:40.443219 4699 ???:1] "http: TLS handshake error from 192.168.126.11:40548: no serving certificate available for the kubelet" Feb 26 11:14:41 crc kubenswrapper[4699]: I0226 11:14:41.585232 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:14:41 crc kubenswrapper[4699]: I0226 11:14:41.585292 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:14:42 crc kubenswrapper[4699]: I0226 11:14:42.626960 4699 patch_prober.go:28] interesting pod/route-controller-manager-59b4784554-77qxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 
10.217.0.55:8443: connect: connection refused" start-of-body= Feb 26 11:14:42 crc kubenswrapper[4699]: I0226 11:14:42.627442 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" podUID="2f265819-8c24-4d84-9afe-423152764dfb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 26 11:14:42 crc kubenswrapper[4699]: I0226 11:14:42.628078 4699 patch_prober.go:28] interesting pod/controller-manager-7c78ff548b-nppmt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Feb 26 11:14:42 crc kubenswrapper[4699]: I0226 11:14:42.628160 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" podUID="1873d943-4785-4bc5-a9c4-5a027a932464" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.576605 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 11:14:45 crc kubenswrapper[4699]: E0226 11:14:45.576899 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4168d2c5-00be-4270-9a2b-c2b8847e4593" containerName="pruner" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.576916 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="4168d2c5-00be-4270-9a2b-c2b8847e4593" containerName="pruner" Feb 26 11:14:45 crc kubenswrapper[4699]: E0226 11:14:45.576930 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7" containerName="pruner" Feb 26 
11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.576939 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7" containerName="pruner" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.577087 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="4168d2c5-00be-4270-9a2b-c2b8847e4593" containerName="pruner" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.577101 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6f5fd6-ce1d-48d1-bb78-237e07a93ff7" containerName="pruner" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.577486 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.579622 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.580102 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.590582 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.750743 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e999a971-660e-4244-8ff3-5d41795bd7f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e999a971-660e-4244-8ff3-5d41795bd7f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.750830 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e999a971-660e-4244-8ff3-5d41795bd7f1-kube-api-access\") pod 
\"revision-pruner-9-crc\" (UID: \"e999a971-660e-4244-8ff3-5d41795bd7f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.852605 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e999a971-660e-4244-8ff3-5d41795bd7f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e999a971-660e-4244-8ff3-5d41795bd7f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.852793 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e999a971-660e-4244-8ff3-5d41795bd7f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e999a971-660e-4244-8ff3-5d41795bd7f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.852937 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e999a971-660e-4244-8ff3-5d41795bd7f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e999a971-660e-4244-8ff3-5d41795bd7f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.892715 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e999a971-660e-4244-8ff3-5d41795bd7f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e999a971-660e-4244-8ff3-5d41795bd7f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:14:45 crc kubenswrapper[4699]: I0226 11:14:45.910226 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:14:49 crc kubenswrapper[4699]: I0226 11:14:49.057735 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:49 crc kubenswrapper[4699]: I0226 11:14:49.057839 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:14:50 crc kubenswrapper[4699]: I0226 11:14:50.782809 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 11:14:50 crc kubenswrapper[4699]: I0226 11:14:50.784363 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:14:50 crc kubenswrapper[4699]: I0226 11:14:50.788262 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 11:14:50 crc kubenswrapper[4699]: I0226 11:14:50.881188 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-var-lock\") pod \"installer-9-crc\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:14:50 crc kubenswrapper[4699]: I0226 11:14:50.881333 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:14:50 crc kubenswrapper[4699]: I0226 11:14:50.881435 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a904aa73-23d7-4994-882a-4afafe02fb82-kube-api-access\") pod \"installer-9-crc\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:14:50 crc kubenswrapper[4699]: I0226 11:14:50.983382 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a904aa73-23d7-4994-882a-4afafe02fb82-kube-api-access\") pod \"installer-9-crc\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:14:50 crc kubenswrapper[4699]: I0226 11:14:50.983459 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-var-lock\") pod \"installer-9-crc\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:14:50 crc kubenswrapper[4699]: I0226 11:14:50.983534 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:14:50 crc kubenswrapper[4699]: I0226 11:14:50.983615 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:14:50 crc kubenswrapper[4699]: I0226 11:14:50.983952 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-var-lock\") pod \"installer-9-crc\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:14:51 crc kubenswrapper[4699]: I0226 11:14:51.002716 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a904aa73-23d7-4994-882a-4afafe02fb82-kube-api-access\") pod \"installer-9-crc\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:14:51 crc kubenswrapper[4699]: I0226 11:14:51.122627 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:14:52 crc kubenswrapper[4699]: I0226 11:14:52.583355 4699 patch_prober.go:28] interesting pod/route-controller-manager-59b4784554-77qxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 26 11:14:52 crc kubenswrapper[4699]: I0226 11:14:52.584247 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" podUID="2f265819-8c24-4d84-9afe-423152764dfb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 26 11:14:52 crc kubenswrapper[4699]: I0226 11:14:52.625455 4699 patch_prober.go:28] interesting pod/controller-manager-7c78ff548b-nppmt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Feb 26 11:14:52 crc kubenswrapper[4699]: I0226 11:14:52.625517 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" podUID="1873d943-4785-4bc5-a9c4-5a027a932464" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Feb 26 11:14:59 crc kubenswrapper[4699]: I0226 11:14:59.056751 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:14:59 crc 
kubenswrapper[4699]: I0226 11:14:59.057141 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.151166 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4"] Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.153043 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.155722 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.156940 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.158637 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4"] Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.284331 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed8aec36-74ad-4c69-baf8-d672010495e9-secret-volume\") pod \"collect-profiles-29535075-hl4g4\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.284392 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ed8aec36-74ad-4c69-baf8-d672010495e9-config-volume\") pod \"collect-profiles-29535075-hl4g4\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.284467 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kmjp\" (UniqueName: \"kubernetes.io/projected/ed8aec36-74ad-4c69-baf8-d672010495e9-kube-api-access-7kmjp\") pod \"collect-profiles-29535075-hl4g4\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.385719 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed8aec36-74ad-4c69-baf8-d672010495e9-secret-volume\") pod \"collect-profiles-29535075-hl4g4\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.385777 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed8aec36-74ad-4c69-baf8-d672010495e9-config-volume\") pod \"collect-profiles-29535075-hl4g4\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.385851 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kmjp\" (UniqueName: \"kubernetes.io/projected/ed8aec36-74ad-4c69-baf8-d672010495e9-kube-api-access-7kmjp\") pod \"collect-profiles-29535075-hl4g4\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:00 crc 
kubenswrapper[4699]: I0226 11:15:00.630034 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed8aec36-74ad-4c69-baf8-d672010495e9-config-volume\") pod \"collect-profiles-29535075-hl4g4\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.639659 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kmjp\" (UniqueName: \"kubernetes.io/projected/ed8aec36-74ad-4c69-baf8-d672010495e9-kube-api-access-7kmjp\") pod \"collect-profiles-29535075-hl4g4\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.640174 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed8aec36-74ad-4c69-baf8-d672010495e9-secret-volume\") pod \"collect-profiles-29535075-hl4g4\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:00 crc kubenswrapper[4699]: I0226 11:15:00.776722 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:02 crc kubenswrapper[4699]: I0226 11:15:02.642382 4699 patch_prober.go:28] interesting pod/controller-manager-7c78ff548b-nppmt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Feb 26 11:15:02 crc kubenswrapper[4699]: I0226 11:15:02.642946 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" podUID="1873d943-4785-4bc5-a9c4-5a027a932464" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Feb 26 11:15:03 crc kubenswrapper[4699]: I0226 11:15:03.639923 4699 patch_prober.go:28] interesting pod/route-controller-manager-59b4784554-77qxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:15:03 crc kubenswrapper[4699]: I0226 11:15:03.640650 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" podUID="2f265819-8c24-4d84-9afe-423152764dfb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.589861 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.625103 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz"] Feb 26 11:15:08 crc kubenswrapper[4699]: E0226 11:15:08.625759 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f265819-8c24-4d84-9afe-423152764dfb" containerName="route-controller-manager" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.625817 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f265819-8c24-4d84-9afe-423152764dfb" containerName="route-controller-manager" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.625986 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f265819-8c24-4d84-9afe-423152764dfb" containerName="route-controller-manager" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.626701 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.649486 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz"] Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.718591 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-client-ca\") pod \"2f265819-8c24-4d84-9afe-423152764dfb\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.718667 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md2wz\" (UniqueName: \"kubernetes.io/projected/2f265819-8c24-4d84-9afe-423152764dfb-kube-api-access-md2wz\") pod \"2f265819-8c24-4d84-9afe-423152764dfb\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.718735 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-config\") pod \"2f265819-8c24-4d84-9afe-423152764dfb\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.718798 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f265819-8c24-4d84-9afe-423152764dfb-serving-cert\") pod \"2f265819-8c24-4d84-9afe-423152764dfb\" (UID: \"2f265819-8c24-4d84-9afe-423152764dfb\") " Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.719031 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8wsx\" (UniqueName: \"kubernetes.io/projected/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-kube-api-access-z8wsx\") 
pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.719128 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-serving-cert\") pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.719185 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-config\") pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.719242 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-client-ca\") pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.720486 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-config" (OuterVolumeSpecName: "config") pod "2f265819-8c24-4d84-9afe-423152764dfb" (UID: "2f265819-8c24-4d84-9afe-423152764dfb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.721215 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-client-ca" (OuterVolumeSpecName: "client-ca") pod "2f265819-8c24-4d84-9afe-423152764dfb" (UID: "2f265819-8c24-4d84-9afe-423152764dfb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.736030 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f265819-8c24-4d84-9afe-423152764dfb-kube-api-access-md2wz" (OuterVolumeSpecName: "kube-api-access-md2wz") pod "2f265819-8c24-4d84-9afe-423152764dfb" (UID: "2f265819-8c24-4d84-9afe-423152764dfb"). InnerVolumeSpecName "kube-api-access-md2wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.820928 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-config\") pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.821020 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-client-ca\") pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.821102 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8wsx\" (UniqueName: 
\"kubernetes.io/projected/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-kube-api-access-z8wsx\") pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.821188 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-serving-cert\") pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.821278 4699 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.821297 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md2wz\" (UniqueName: \"kubernetes.io/projected/2f265819-8c24-4d84-9afe-423152764dfb-kube-api-access-md2wz\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.821314 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f265819-8c24-4d84-9afe-423152764dfb-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.861763 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-config\") pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.863091 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-client-ca\") pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.879011 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-serving-cert\") pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.879341 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f265819-8c24-4d84-9afe-423152764dfb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2f265819-8c24-4d84-9afe-423152764dfb" (UID: "2f265819-8c24-4d84-9afe-423152764dfb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:08 crc kubenswrapper[4699]: I0226 11:15:08.900228 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8wsx\" (UniqueName: \"kubernetes.io/projected/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-kube-api-access-z8wsx\") pod \"route-controller-manager-7d67f9fbb8-7gsgz\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:09 crc kubenswrapper[4699]: I0226 11:15:08.924255 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f265819-8c24-4d84-9afe-423152764dfb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:09 crc kubenswrapper[4699]: I0226 11:15:09.164015 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:09 crc kubenswrapper[4699]: I0226 11:15:09.164765 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:15:09 crc kubenswrapper[4699]: I0226 11:15:09.164807 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:15:09 crc kubenswrapper[4699]: I0226 11:15:09.181076 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" 
event={"ID":"2f265819-8c24-4d84-9afe-423152764dfb","Type":"ContainerDied","Data":"ad9a383bdf2560f2e8e699488a34354989214933a94da177587d7b8f57d53fa3"} Feb 26 11:15:09 crc kubenswrapper[4699]: I0226 11:15:09.181147 4699 scope.go:117] "RemoveContainer" containerID="438953ae4c2dd5482a447caa172401f76f9355d9f070a2edd3604d4596edc619" Feb 26 11:15:09 crc kubenswrapper[4699]: I0226 11:15:09.181301 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf" Feb 26 11:15:09 crc kubenswrapper[4699]: I0226 11:15:09.348308 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf"] Feb 26 11:15:09 crc kubenswrapper[4699]: I0226 11:15:09.352093 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b4784554-77qxf"] Feb 26 11:15:10 crc kubenswrapper[4699]: I0226 11:15:10.386985 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f265819-8c24-4d84-9afe-423152764dfb" path="/var/lib/kubelet/pods/2f265819-8c24-4d84-9afe-423152764dfb/volumes" Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.584764 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.584911 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:15:11 crc kubenswrapper[4699]: 
E0226 11:15:11.683835 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 26 11:15:11 crc kubenswrapper[4699]: E0226 11:15:11.684452 4699 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 11:15:11 crc kubenswrapper[4699]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 26 11:15:11 crc kubenswrapper[4699]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qq8lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29535074-bjfld_openshift-infra(30d444da-9127-459c-97c6-cdcff5b20e67): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 26 11:15:11 crc kubenswrapper[4699]: > logger="UnhandledError" Feb 26 11:15:11 crc kubenswrapper[4699]: E0226 11:15:11.686067 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled\"" pod="openshift-infra/auto-csr-approver-29535074-bjfld" podUID="30d444da-9127-459c-97c6-cdcff5b20e67" Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.696429 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.929234 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v87n7\" (UniqueName: \"kubernetes.io/projected/1873d943-4785-4bc5-a9c4-5a027a932464-kube-api-access-v87n7\") pod \"1873d943-4785-4bc5-a9c4-5a027a932464\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.929704 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1873d943-4785-4bc5-a9c4-5a027a932464-serving-cert\") pod \"1873d943-4785-4bc5-a9c4-5a027a932464\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.929764 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-config\") pod \"1873d943-4785-4bc5-a9c4-5a027a932464\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.929861 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-client-ca\") pod \"1873d943-4785-4bc5-a9c4-5a027a932464\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.929921 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-proxy-ca-bundles\") pod 
\"1873d943-4785-4bc5-a9c4-5a027a932464\" (UID: \"1873d943-4785-4bc5-a9c4-5a027a932464\") " Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.931014 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1873d943-4785-4bc5-a9c4-5a027a932464" (UID: "1873d943-4785-4bc5-a9c4-5a027a932464"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.932375 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-config" (OuterVolumeSpecName: "config") pod "1873d943-4785-4bc5-a9c4-5a027a932464" (UID: "1873d943-4785-4bc5-a9c4-5a027a932464"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.947830 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1873d943-4785-4bc5-a9c4-5a027a932464-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1873d943-4785-4bc5-a9c4-5a027a932464" (UID: "1873d943-4785-4bc5-a9c4-5a027a932464"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.948754 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-client-ca" (OuterVolumeSpecName: "client-ca") pod "1873d943-4785-4bc5-a9c4-5a027a932464" (UID: "1873d943-4785-4bc5-a9c4-5a027a932464"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:11 crc kubenswrapper[4699]: I0226 11:15:11.952378 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1873d943-4785-4bc5-a9c4-5a027a932464-kube-api-access-v87n7" (OuterVolumeSpecName: "kube-api-access-v87n7") pod "1873d943-4785-4bc5-a9c4-5a027a932464" (UID: "1873d943-4785-4bc5-a9c4-5a027a932464"). InnerVolumeSpecName "kube-api-access-v87n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:11.970652 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bcd6f597b-s4crp"] Feb 26 11:15:12 crc kubenswrapper[4699]: E0226 11:15:11.972414 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1873d943-4785-4bc5-a9c4-5a027a932464" containerName="controller-manager" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:11.972452 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="1873d943-4785-4bc5-a9c4-5a027a932464" containerName="controller-manager" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:11.974362 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="1873d943-4785-4bc5-a9c4-5a027a932464" containerName="controller-manager" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:11.975272 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:11.979026 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bcd6f597b-s4crp"] Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.031367 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-client-ca\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.031663 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-config\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.031729 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-proxy-ca-bundles\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.031858 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4d1ddf-c814-4b93-972e-bffff61f9170-serving-cert\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " 
pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.031909 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkb74\" (UniqueName: \"kubernetes.io/projected/eb4d1ddf-c814-4b93-972e-bffff61f9170-kube-api-access-lkb74\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.032029 4699 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.032070 4699 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.032092 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v87n7\" (UniqueName: \"kubernetes.io/projected/1873d943-4785-4bc5-a9c4-5a027a932464-kube-api-access-v87n7\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.032105 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1873d943-4785-4bc5-a9c4-5a027a932464-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.032131 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1873d943-4785-4bc5-a9c4-5a027a932464-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.133390 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4d1ddf-c814-4b93-972e-bffff61f9170-serving-cert\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.133484 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkb74\" (UniqueName: \"kubernetes.io/projected/eb4d1ddf-c814-4b93-972e-bffff61f9170-kube-api-access-lkb74\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.133536 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-client-ca\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.133627 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-config\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.133662 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-proxy-ca-bundles\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 
11:15:12.135019 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-client-ca\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.135390 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-proxy-ca-bundles\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.135647 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-config\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.137783 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4d1ddf-c814-4b93-972e-bffff61f9170-serving-cert\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.159745 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkb74\" (UniqueName: \"kubernetes.io/projected/eb4d1ddf-c814-4b93-972e-bffff61f9170-kube-api-access-lkb74\") pod \"controller-manager-7bcd6f597b-s4crp\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " 
pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.331539 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.426561 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" event={"ID":"1873d943-4785-4bc5-a9c4-5a027a932464","Type":"ContainerDied","Data":"8c28a63f20c67fc69e31c70403424ed7ca9920d8e519c0772be0d307ed557c9d"} Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.426604 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c78ff548b-nppmt" Feb 26 11:15:12 crc kubenswrapper[4699]: E0226 11:15:12.432248 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29535074-bjfld" podUID="30d444da-9127-459c-97c6-cdcff5b20e67" Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.461469 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c78ff548b-nppmt"] Feb 26 11:15:12 crc kubenswrapper[4699]: I0226 11:15:12.465534 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c78ff548b-nppmt"] Feb 26 11:15:14 crc kubenswrapper[4699]: I0226 11:15:14.348616 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1873d943-4785-4bc5-a9c4-5a027a932464" path="/var/lib/kubelet/pods/1873d943-4785-4bc5-a9c4-5a027a932464/volumes" Feb 26 11:15:19 crc kubenswrapper[4699]: I0226 11:15:19.055520 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:15:19 crc kubenswrapper[4699]: I0226 11:15:19.056308 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:15:19 crc kubenswrapper[4699]: I0226 11:15:19.406189 4699 scope.go:117] "RemoveContainer" containerID="a08711cd4ea22926a99c6ab95e9c9d94bed1d66b8cc11c69332013859eda951b" Feb 26 11:15:20 crc kubenswrapper[4699]: I0226 11:15:20.599808 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4"] Feb 26 11:15:20 crc kubenswrapper[4699]: I0226 11:15:20.616968 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 11:15:20 crc kubenswrapper[4699]: I0226 11:15:20.628208 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 11:15:21 crc kubenswrapper[4699]: I0226 11:15:21.427933 4699 ???:1] "http: TLS handshake error from 192.168.126.11:41566: no serving certificate available for the kubelet" Feb 26 11:15:26 crc kubenswrapper[4699]: E0226 11:15:26.072197 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 11:15:26 crc kubenswrapper[4699]: E0226 11:15:26.073169 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9z6wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mzgjj_openshift-marketplace(71a83978-4f86-404b-967a-0e7493ff6721): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:15:26 crc kubenswrapper[4699]: E0226 11:15:26.074722 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mzgjj" podUID="71a83978-4f86-404b-967a-0e7493ff6721" Feb 26 11:15:27 crc kubenswrapper[4699]: E0226 11:15:27.633844 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 11:15:27 crc kubenswrapper[4699]: E0226 11:15:27.634281 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hnhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fhgnz_openshift-marketplace(1389c8c4-9546-4193-8067-50db90448d4f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:15:27 crc kubenswrapper[4699]: E0226 11:15:27.635448 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fhgnz" podUID="1389c8c4-9546-4193-8067-50db90448d4f" Feb 26 11:15:28 crc kubenswrapper[4699]: I0226 11:15:28.548752 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-22qbz"] Feb 26 11:15:29 crc kubenswrapper[4699]: I0226 11:15:29.056404 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:15:29 crc kubenswrapper[4699]: I0226 11:15:29.056502 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:15:39 crc kubenswrapper[4699]: I0226 11:15:39.056205 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: 
connect: connection refused" start-of-body= Feb 26 11:15:39 crc kubenswrapper[4699]: I0226 11:15:39.056750 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:15:41 crc kubenswrapper[4699]: I0226 11:15:41.585471 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:15:41 crc kubenswrapper[4699]: I0226 11:15:41.586057 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:15:41 crc kubenswrapper[4699]: I0226 11:15:41.586207 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:15:41 crc kubenswrapper[4699]: I0226 11:15:41.588220 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:15:41 crc kubenswrapper[4699]: I0226 11:15:41.588431 4699 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4" gracePeriod=600 Feb 26 11:15:44 crc kubenswrapper[4699]: I0226 11:15:44.209676 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4" exitCode=0 Feb 26 11:15:44 crc kubenswrapper[4699]: I0226 11:15:44.209794 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4"} Feb 26 11:15:44 crc kubenswrapper[4699]: E0226 11:15:44.368619 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 11:15:44 crc kubenswrapper[4699]: E0226 11:15:44.368815 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tqhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sc9c6_openshift-marketplace(44d171ad-7d92-4c70-a686-65f60ded8a03): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:15:44 crc kubenswrapper[4699]: E0226 11:15:44.369998 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sc9c6" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" Feb 26 11:15:47 crc 
kubenswrapper[4699]: E0226 11:15:47.092413 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sc9c6" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" Feb 26 11:15:47 crc kubenswrapper[4699]: I0226 11:15:47.233473 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a904aa73-23d7-4994-882a-4afafe02fb82","Type":"ContainerStarted","Data":"c6277d115f7a8e7d06e98e6fbf746a8f5f67a2bf9660b521fc6a925c224a7f1a"} Feb 26 11:15:47 crc kubenswrapper[4699]: I0226 11:15:47.236019 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" event={"ID":"ed8aec36-74ad-4c69-baf8-d672010495e9","Type":"ContainerStarted","Data":"ef7f4740e98b0b8517cf802c045aedd86853560276f43770af8b78d775aa6c30"} Feb 26 11:15:47 crc kubenswrapper[4699]: I0226 11:15:47.236865 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e999a971-660e-4244-8ff3-5d41795bd7f1","Type":"ContainerStarted","Data":"8647ab112fe5d72e6317d33357b8faf5f04c7e9ece66676a3eb1dd1a578be5e7"} Feb 26 11:15:47 crc kubenswrapper[4699]: I0226 11:15:47.299433 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz"] Feb 26 11:15:47 crc kubenswrapper[4699]: W0226 11:15:47.316668 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb7738e1_5c72_401d_ba71_9ae3b1d9d266.slice/crio-5afbad926c55c3da8aaf47419aa634cd696fc133f69e9ec92350e49c1022647d WatchSource:0}: Error finding container 5afbad926c55c3da8aaf47419aa634cd696fc133f69e9ec92350e49c1022647d: Status 404 returned error 
can't find the container with id 5afbad926c55c3da8aaf47419aa634cd696fc133f69e9ec92350e49c1022647d Feb 26 11:15:47 crc kubenswrapper[4699]: I0226 11:15:47.393730 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bcd6f597b-s4crp"] Feb 26 11:15:48 crc kubenswrapper[4699]: I0226 11:15:48.242855 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" event={"ID":"fb7738e1-5c72-401d-ba71-9ae3b1d9d266","Type":"ContainerStarted","Data":"5afbad926c55c3da8aaf47419aa634cd696fc133f69e9ec92350e49c1022647d"} Feb 26 11:15:48 crc kubenswrapper[4699]: I0226 11:15:48.243686 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" event={"ID":"eb4d1ddf-c814-4b93-972e-bffff61f9170","Type":"ContainerStarted","Data":"2891ab62ab9758bb7b4c92c3675a2020c320b414be74c46ec576f65a5d1c42f1"} Feb 26 11:15:48 crc kubenswrapper[4699]: I0226 11:15:48.688064 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bcd6f597b-s4crp"] Feb 26 11:15:48 crc kubenswrapper[4699]: I0226 11:15:48.789065 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz"] Feb 26 11:15:49 crc kubenswrapper[4699]: I0226 11:15:49.055513 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:15:49 crc kubenswrapper[4699]: I0226 11:15:49.055577 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:15:50 crc kubenswrapper[4699]: I0226 11:15:50.257332 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" event={"ID":"eb4d1ddf-c814-4b93-972e-bffff61f9170","Type":"ContainerStarted","Data":"71dd1fba88e532cb5564144c451ba1f550ea9fc371b535e240792729f2f00e75"} Feb 26 11:15:50 crc kubenswrapper[4699]: I0226 11:15:50.258987 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a904aa73-23d7-4994-882a-4afafe02fb82","Type":"ContainerStarted","Data":"3fd7c462ca5ff5bad3835e30c617b483507570293909f8328947b3fbdead2389"} Feb 26 11:15:50 crc kubenswrapper[4699]: I0226 11:15:50.265892 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" event={"ID":"fb7738e1-5c72-401d-ba71-9ae3b1d9d266","Type":"ContainerStarted","Data":"42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c"} Feb 26 11:15:50 crc kubenswrapper[4699]: I0226 11:15:50.266014 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" podUID="fb7738e1-5c72-401d-ba71-9ae3b1d9d266" containerName="route-controller-manager" containerID="cri-o://42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c" gracePeriod=30 Feb 26 11:15:50 crc kubenswrapper[4699]: I0226 11:15:50.266712 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:50 crc kubenswrapper[4699]: I0226 11:15:50.278383 4699 generic.go:334] "Generic (PLEG): container finished" podID="ed8aec36-74ad-4c69-baf8-d672010495e9" containerID="1a649c81866f7635a569ca368b86ef4aadb641a91575dd77e87694a700822950" exitCode=0 Feb 26 11:15:50 crc 
kubenswrapper[4699]: I0226 11:15:50.278447 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" event={"ID":"ed8aec36-74ad-4c69-baf8-d672010495e9","Type":"ContainerDied","Data":"1a649c81866f7635a569ca368b86ef4aadb641a91575dd77e87694a700822950"} Feb 26 11:15:50 crc kubenswrapper[4699]: I0226 11:15:50.279636 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e999a971-660e-4244-8ff3-5d41795bd7f1","Type":"ContainerStarted","Data":"043f1a99306eeb89179e5d095bad024d7a81e0a392209e6ed07047c4d32579cd"} Feb 26 11:15:50 crc kubenswrapper[4699]: I0226 11:15:50.281283 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tcnxt" event={"ID":"72b1bc55-f48b-4d90-ab02-3a80438096b6","Type":"ContainerStarted","Data":"c4a4d68b91ccaab01354dac36137a42576c82cee0d31b4d795e6c8dc0cda8b68"} Feb 26 11:15:50 crc kubenswrapper[4699]: I0226 11:15:50.331830 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" podStartSLOduration=62.331806418 podStartE2EDuration="1m2.331806418s" podCreationTimestamp="2026-02-26 11:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:15:50.307939426 +0000 UTC m=+296.118765880" watchObservedRunningTime="2026-02-26 11:15:50.331806418 +0000 UTC m=+296.142632862" Feb 26 11:15:50 crc kubenswrapper[4699]: E0226 11:15:50.360486 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 26 11:15:50 crc kubenswrapper[4699]: E0226 11:15:50.360690 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr725,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s8kpz_openshift-marketplace(8c96a703-e568-4916-8035-a951ae91dc2b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:15:50 crc kubenswrapper[4699]: E0226 11:15:50.362744 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-s8kpz" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" Feb 26 11:15:51 crc kubenswrapper[4699]: I0226 11:15:51.080880 4699 patch_prober.go:28] interesting pod/route-controller-manager-7d67f9fbb8-7gsgz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": read tcp 10.217.0.2:58574->10.217.0.61:8443: read: connection reset by peer" start-of-body= Feb 26 11:15:51 crc kubenswrapper[4699]: I0226 11:15:51.081281 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" podUID="fb7738e1-5c72-401d-ba71-9ae3b1d9d266" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": read tcp 10.217.0.2:58574->10.217.0.61:8443: read: connection reset by peer" Feb 26 11:15:51 crc kubenswrapper[4699]: I0226 11:15:51.289170 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"650d424704999ccaef77ddc678846c35c1a480092b312ddf8beddcd52de6fa7e"} Feb 26 11:15:51 crc kubenswrapper[4699]: I0226 11:15:51.289438 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" podUID="eb4d1ddf-c814-4b93-972e-bffff61f9170" containerName="controller-manager" containerID="cri-o://71dd1fba88e532cb5564144c451ba1f550ea9fc371b535e240792729f2f00e75" gracePeriod=30 Feb 26 11:15:51 crc kubenswrapper[4699]: I0226 11:15:51.290440 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:15:51 crc kubenswrapper[4699]: I0226 11:15:51.290481 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:15:51 crc kubenswrapper[4699]: E0226 11:15:51.293085 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s8kpz" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" Feb 26 11:15:51 crc kubenswrapper[4699]: I0226 11:15:51.322153 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" podStartSLOduration=63.322135377 podStartE2EDuration="1m3.322135377s" podCreationTimestamp="2026-02-26 11:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:15:51.320504179 +0000 UTC m=+297.131330633" watchObservedRunningTime="2026-02-26 11:15:51.322135377 +0000 UTC m=+297.132961811" Feb 26 11:15:51 crc kubenswrapper[4699]: I0226 11:15:51.339185 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=61.339161061 podStartE2EDuration="1m1.339161061s" podCreationTimestamp="2026-02-26 11:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:15:51.335486734 +0000 UTC m=+297.146313198" watchObservedRunningTime="2026-02-26 
11:15:51.339161061 +0000 UTC m=+297.149987505" Feb 26 11:15:51 crc kubenswrapper[4699]: I0226 11:15:51.379359 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=66.379336497 podStartE2EDuration="1m6.379336497s" podCreationTimestamp="2026-02-26 11:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:15:51.377146923 +0000 UTC m=+297.187973357" watchObservedRunningTime="2026-02-26 11:15:51.379336497 +0000 UTC m=+297.190162941" Feb 26 11:15:51 crc kubenswrapper[4699]: E0226 11:15:51.714766 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 11:15:51 crc kubenswrapper[4699]: E0226 11:15:51.715296 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44jnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jhgks_openshift-marketplace(6b9da973-6b5f-4485-adca-8792b0a3d256): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:15:51 crc kubenswrapper[4699]: E0226 11:15:51.716547 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jhgks" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" Feb 26 11:15:51 crc 
kubenswrapper[4699]: E0226 11:15:51.941213 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 11:15:51 crc kubenswrapper[4699]: E0226 11:15:51.941407 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-699tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-phhbz_openshift-marketplace(9ea10063-7888-400e-af1c-216cbde5a13e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:15:51 crc kubenswrapper[4699]: E0226 11:15:51.943632 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-phhbz" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" Feb 26 11:15:51 crc kubenswrapper[4699]: E0226 11:15:51.987563 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 26 11:15:51 crc kubenswrapper[4699]: E0226 11:15:51.987760 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xhdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hrk4n_openshift-marketplace(6e7ddf51-5522-4085-8567-76c9a254ed15): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:15:51 crc kubenswrapper[4699]: E0226 11:15:51.988990 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hrk4n" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" Feb 26 11:15:51 crc 
kubenswrapper[4699]: E0226 11:15:51.997647 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 11:15:51 crc kubenswrapper[4699]: E0226 11:15:51.997788 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqrqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-czwkc_openshift-marketplace(ac0026c3-1fad-4b34-9c42-389971f0c773): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:15:51 crc kubenswrapper[4699]: E0226 11:15:51.999001 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-czwkc" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.139148 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7d67f9fbb8-7gsgz_fb7738e1-5c72-401d-ba71-9ae3b1d9d266/route-controller-manager/0.log" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.139213 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.153667 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.172525 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8wsx\" (UniqueName: \"kubernetes.io/projected/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-kube-api-access-z8wsx\") pod \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.172580 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-config\") pod \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.172725 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-serving-cert\") pod \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.172770 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-client-ca\") pod \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\" (UID: \"fb7738e1-5c72-401d-ba71-9ae3b1d9d266\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.174265 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-client-ca" (OuterVolumeSpecName: "client-ca") pod "fb7738e1-5c72-401d-ba71-9ae3b1d9d266" (UID: "fb7738e1-5c72-401d-ba71-9ae3b1d9d266"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.174934 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-config" (OuterVolumeSpecName: "config") pod "fb7738e1-5c72-401d-ba71-9ae3b1d9d266" (UID: "fb7738e1-5c72-401d-ba71-9ae3b1d9d266"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.182019 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-kube-api-access-z8wsx" (OuterVolumeSpecName: "kube-api-access-z8wsx") pod "fb7738e1-5c72-401d-ba71-9ae3b1d9d266" (UID: "fb7738e1-5c72-401d-ba71-9ae3b1d9d266"). InnerVolumeSpecName "kube-api-access-z8wsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.183781 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt"] Feb 26 11:15:52 crc kubenswrapper[4699]: E0226 11:15:52.184131 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8aec36-74ad-4c69-baf8-d672010495e9" containerName="collect-profiles" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.184149 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8aec36-74ad-4c69-baf8-d672010495e9" containerName="collect-profiles" Feb 26 11:15:52 crc kubenswrapper[4699]: E0226 11:15:52.184164 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7738e1-5c72-401d-ba71-9ae3b1d9d266" containerName="route-controller-manager" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.184175 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7738e1-5c72-401d-ba71-9ae3b1d9d266" containerName="route-controller-manager" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 
11:15:52.184352 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8aec36-74ad-4c69-baf8-d672010495e9" containerName="collect-profiles" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.184378 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb7738e1-5c72-401d-ba71-9ae3b1d9d266" containerName="route-controller-manager" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.184985 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.185994 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fb7738e1-5c72-401d-ba71-9ae3b1d9d266" (UID: "fb7738e1-5c72-401d-ba71-9ae3b1d9d266"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.197142 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt"] Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.274439 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed8aec36-74ad-4c69-baf8-d672010495e9-secret-volume\") pod \"ed8aec36-74ad-4c69-baf8-d672010495e9\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.274525 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kmjp\" (UniqueName: \"kubernetes.io/projected/ed8aec36-74ad-4c69-baf8-d672010495e9-kube-api-access-7kmjp\") pod \"ed8aec36-74ad-4c69-baf8-d672010495e9\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 
11:15:52.274582 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed8aec36-74ad-4c69-baf8-d672010495e9-config-volume\") pod \"ed8aec36-74ad-4c69-baf8-d672010495e9\" (UID: \"ed8aec36-74ad-4c69-baf8-d672010495e9\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.274807 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54vbq\" (UniqueName: \"kubernetes.io/projected/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-kube-api-access-54vbq\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.274845 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-client-ca\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.274890 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-config\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.274929 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-serving-cert\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: 
\"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.274974 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.274992 4699 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.275002 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8wsx\" (UniqueName: \"kubernetes.io/projected/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-kube-api-access-z8wsx\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.275014 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb7738e1-5c72-401d-ba71-9ae3b1d9d266-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.276019 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8aec36-74ad-4c69-baf8-d672010495e9-config-volume" (OuterVolumeSpecName: "config-volume") pod "ed8aec36-74ad-4c69-baf8-d672010495e9" (UID: "ed8aec36-74ad-4c69-baf8-d672010495e9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.278975 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8aec36-74ad-4c69-baf8-d672010495e9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ed8aec36-74ad-4c69-baf8-d672010495e9" (UID: "ed8aec36-74ad-4c69-baf8-d672010495e9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.279200 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8aec36-74ad-4c69-baf8-d672010495e9-kube-api-access-7kmjp" (OuterVolumeSpecName: "kube-api-access-7kmjp") pod "ed8aec36-74ad-4c69-baf8-d672010495e9" (UID: "ed8aec36-74ad-4c69-baf8-d672010495e9"). InnerVolumeSpecName "kube-api-access-7kmjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.295405 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" event={"ID":"ed8aec36-74ad-4c69-baf8-d672010495e9","Type":"ContainerDied","Data":"ef7f4740e98b0b8517cf802c045aedd86853560276f43770af8b78d775aa6c30"} Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.295452 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef7f4740e98b0b8517cf802c045aedd86853560276f43770af8b78d775aa6c30" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.295491 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.297282 4699 generic.go:334] "Generic (PLEG): container finished" podID="e999a971-660e-4244-8ff3-5d41795bd7f1" containerID="043f1a99306eeb89179e5d095bad024d7a81e0a392209e6ed07047c4d32579cd" exitCode=0 Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.297389 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e999a971-660e-4244-8ff3-5d41795bd7f1","Type":"ContainerDied","Data":"043f1a99306eeb89179e5d095bad024d7a81e0a392209e6ed07047c4d32579cd"} Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.303632 4699 generic.go:334] "Generic (PLEG): container finished" podID="eb4d1ddf-c814-4b93-972e-bffff61f9170" containerID="71dd1fba88e532cb5564144c451ba1f550ea9fc371b535e240792729f2f00e75" exitCode=0 Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.303745 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" event={"ID":"eb4d1ddf-c814-4b93-972e-bffff61f9170","Type":"ContainerDied","Data":"71dd1fba88e532cb5564144c451ba1f550ea9fc371b535e240792729f2f00e75"} Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.305277 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7d67f9fbb8-7gsgz_fb7738e1-5c72-401d-ba71-9ae3b1d9d266/route-controller-manager/0.log" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.305333 4699 generic.go:334] "Generic (PLEG): container finished" podID="fb7738e1-5c72-401d-ba71-9ae3b1d9d266" containerID="42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c" exitCode=255 Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.305379 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" event={"ID":"fb7738e1-5c72-401d-ba71-9ae3b1d9d266","Type":"ContainerDied","Data":"42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c"} Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.305409 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.305422 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz" event={"ID":"fb7738e1-5c72-401d-ba71-9ae3b1d9d266","Type":"ContainerDied","Data":"5afbad926c55c3da8aaf47419aa634cd696fc133f69e9ec92350e49c1022647d"} Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.305443 4699 scope.go:117] "RemoveContainer" containerID="42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.333725 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.343273 4699 patch_prober.go:28] interesting pod/controller-manager-7bcd6f597b-s4crp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.343360 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" podUID="eb4d1ddf-c814-4b93-972e-bffff61f9170" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 
11:15:52.376830 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-serving-cert\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.376990 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54vbq\" (UniqueName: \"kubernetes.io/projected/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-kube-api-access-54vbq\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.377060 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-client-ca\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.377305 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-config\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.377388 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kmjp\" (UniqueName: \"kubernetes.io/projected/ed8aec36-74ad-4c69-baf8-d672010495e9-kube-api-access-7kmjp\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:52 crc 
kubenswrapper[4699]: I0226 11:15:52.377403 4699 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed8aec36-74ad-4c69-baf8-d672010495e9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.377416 4699 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed8aec36-74ad-4c69-baf8-d672010495e9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.382428 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-client-ca\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.383758 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-config\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.386321 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-serving-cert\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.399710 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54vbq\" (UniqueName: 
\"kubernetes.io/projected/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-kube-api-access-54vbq\") pod \"route-controller-manager-5dd48cdbf5-ckczt\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.411200 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz"] Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.414779 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d67f9fbb8-7gsgz"] Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.518919 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:52 crc kubenswrapper[4699]: E0226 11:15:52.580010 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-phhbz" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" Feb 26 11:15:52 crc kubenswrapper[4699]: E0226 11:15:52.580070 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-czwkc" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.580196 4699 scope.go:117] "RemoveContainer" containerID="42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c" Feb 26 11:15:52 crc kubenswrapper[4699]: E0226 11:15:52.580282 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hrk4n" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" Feb 26 11:15:52 crc kubenswrapper[4699]: E0226 11:15:52.580408 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jhgks" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" Feb 26 11:15:52 crc kubenswrapper[4699]: E0226 11:15:52.580568 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c\": container with ID starting with 42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c not found: ID does not exist" containerID="42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.580686 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c"} err="failed to get container status \"42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c\": rpc error: code = NotFound desc = could not find container \"42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c\": container with ID starting with 42f38f097b8ede853f7ded75c70a15cea1617b056009499cf9728bf4be448b9c not found: ID does not exist" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.591585 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.681070 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkb74\" (UniqueName: \"kubernetes.io/projected/eb4d1ddf-c814-4b93-972e-bffff61f9170-kube-api-access-lkb74\") pod \"eb4d1ddf-c814-4b93-972e-bffff61f9170\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.681747 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-proxy-ca-bundles\") pod \"eb4d1ddf-c814-4b93-972e-bffff61f9170\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.681832 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4d1ddf-c814-4b93-972e-bffff61f9170-serving-cert\") pod \"eb4d1ddf-c814-4b93-972e-bffff61f9170\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.681886 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-config\") pod \"eb4d1ddf-c814-4b93-972e-bffff61f9170\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.681919 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-client-ca\") pod \"eb4d1ddf-c814-4b93-972e-bffff61f9170\" (UID: \"eb4d1ddf-c814-4b93-972e-bffff61f9170\") " Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.683335 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-client-ca" (OuterVolumeSpecName: "client-ca") pod "eb4d1ddf-c814-4b93-972e-bffff61f9170" (UID: "eb4d1ddf-c814-4b93-972e-bffff61f9170"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.684626 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-config" (OuterVolumeSpecName: "config") pod "eb4d1ddf-c814-4b93-972e-bffff61f9170" (UID: "eb4d1ddf-c814-4b93-972e-bffff61f9170"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.685012 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eb4d1ddf-c814-4b93-972e-bffff61f9170" (UID: "eb4d1ddf-c814-4b93-972e-bffff61f9170"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.688227 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb4d1ddf-c814-4b93-972e-bffff61f9170-kube-api-access-lkb74" (OuterVolumeSpecName: "kube-api-access-lkb74") pod "eb4d1ddf-c814-4b93-972e-bffff61f9170" (UID: "eb4d1ddf-c814-4b93-972e-bffff61f9170"). InnerVolumeSpecName "kube-api-access-lkb74". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.689081 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb4d1ddf-c814-4b93-972e-bffff61f9170-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eb4d1ddf-c814-4b93-972e-bffff61f9170" (UID: "eb4d1ddf-c814-4b93-972e-bffff61f9170"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.783528 4699 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.783823 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkb74\" (UniqueName: \"kubernetes.io/projected/eb4d1ddf-c814-4b93-972e-bffff61f9170-kube-api-access-lkb74\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.783837 4699 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.783848 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4d1ddf-c814-4b93-972e-bffff61f9170-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:52 crc kubenswrapper[4699]: I0226 11:15:52.783858 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4d1ddf-c814-4b93-972e-bffff61f9170-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.002568 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt"] Feb 26 11:15:53 crc kubenswrapper[4699]: W0226 11:15:53.009309 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf1b992_7a07_4490_bf91_a0e2f802d6aa.slice/crio-fc9c825381387b5b1c159e4f7a59a6327fb0accf80fe9e5362f7a82129337743 WatchSource:0}: Error finding container fc9c825381387b5b1c159e4f7a59a6327fb0accf80fe9e5362f7a82129337743: 
Status 404 returned error can't find the container with id fc9c825381387b5b1c159e4f7a59a6327fb0accf80fe9e5362f7a82129337743 Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.314612 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" event={"ID":"eb4d1ddf-c814-4b93-972e-bffff61f9170","Type":"ContainerDied","Data":"2891ab62ab9758bb7b4c92c3675a2020c320b414be74c46ec576f65a5d1c42f1"} Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.314675 4699 scope.go:117] "RemoveContainer" containerID="71dd1fba88e532cb5564144c451ba1f550ea9fc371b535e240792729f2f00e75" Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.314660 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bcd6f597b-s4crp" Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.318139 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535074-bjfld" event={"ID":"30d444da-9127-459c-97c6-cdcff5b20e67","Type":"ContainerStarted","Data":"19a60f72e3a64feb9f04d813b42f9a20a08e1ed258c497a9b61b68ad603f4b5b"} Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.320024 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" event={"ID":"bcf1b992-7a07-4490-bf91-a0e2f802d6aa","Type":"ContainerStarted","Data":"3bdc2b92ccb0248bc419987c54d581fc1cccfbe419459f4758200553b00ea095"} Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.320062 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" event={"ID":"bcf1b992-7a07-4490-bf91-a0e2f802d6aa","Type":"ContainerStarted","Data":"fc9c825381387b5b1c159e4f7a59a6327fb0accf80fe9e5362f7a82129337743"} Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.352952 4699 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535074-bjfld" podStartSLOduration=2.567809177 podStartE2EDuration="1m53.352931508s" podCreationTimestamp="2026-02-26 11:14:00 +0000 UTC" firstStartedPulling="2026-02-26 11:14:01.806605069 +0000 UTC m=+187.617431503" lastFinishedPulling="2026-02-26 11:15:52.59172741 +0000 UTC m=+298.402553834" observedRunningTime="2026-02-26 11:15:53.339418156 +0000 UTC m=+299.150244600" watchObservedRunningTime="2026-02-26 11:15:53.352931508 +0000 UTC m=+299.163757952" Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.355840 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bcd6f597b-s4crp"] Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.359908 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bcd6f597b-s4crp"] Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.512315 4699 csr.go:261] certificate signing request csr-299v4 is approved, waiting to be issued Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.520608 4699 csr.go:257] certificate signing request csr-299v4 is issued Feb 26 11:15:53 crc kubenswrapper[4699]: I0226 11:15:53.591322 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" podUID="94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" containerName="oauth-openshift" containerID="cri-o://74570cc7e5f47cfb5ae78c7040168924d22c48d5892dd1918f787cd4639c996c" gracePeriod=15 Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.268849 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb4d1ddf-c814-4b93-972e-bffff61f9170" path="/var/lib/kubelet/pods/eb4d1ddf-c814-4b93-972e-bffff61f9170/volumes" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.269460 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb7738e1-5c72-401d-ba71-9ae3b1d9d266" 
path="/var/lib/kubelet/pods/fb7738e1-5c72-401d-ba71-9ae3b1d9d266/volumes" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.328656 4699 generic.go:334] "Generic (PLEG): container finished" podID="94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" containerID="74570cc7e5f47cfb5ae78c7040168924d22c48d5892dd1918f787cd4639c996c" exitCode=0 Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.328773 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" event={"ID":"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466","Type":"ContainerDied","Data":"74570cc7e5f47cfb5ae78c7040168924d22c48d5892dd1918f787cd4639c996c"} Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.331507 4699 generic.go:334] "Generic (PLEG): container finished" podID="30d444da-9127-459c-97c6-cdcff5b20e67" containerID="19a60f72e3a64feb9f04d813b42f9a20a08e1ed258c497a9b61b68ad603f4b5b" exitCode=0 Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.331592 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535074-bjfld" event={"ID":"30d444da-9127-459c-97c6-cdcff5b20e67","Type":"ContainerDied","Data":"19a60f72e3a64feb9f04d813b42f9a20a08e1ed258c497a9b61b68ad603f4b5b"} Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.333226 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.338017 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.353611 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" podStartSLOduration=6.353592195 podStartE2EDuration="6.353592195s" podCreationTimestamp="2026-02-26 
11:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:15:54.353361779 +0000 UTC m=+300.164188233" watchObservedRunningTime="2026-02-26 11:15:54.353592195 +0000 UTC m=+300.164418629" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.522417 4699 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-23 12:46:37.430995219 +0000 UTC Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.522472 4699 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7201h30m42.908527273s for next certificate rotation Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.611359 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk"] Feb 26 11:15:54 crc kubenswrapper[4699]: E0226 11:15:54.611616 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4d1ddf-c814-4b93-972e-bffff61f9170" containerName="controller-manager" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.611630 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4d1ddf-c814-4b93-972e-bffff61f9170" containerName="controller-manager" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.611772 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb4d1ddf-c814-4b93-972e-bffff61f9170" containerName="controller-manager" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.612208 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.615686 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.615885 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.616078 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.616345 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.616576 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.616721 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.624791 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk"] Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.625170 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.711236 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-config\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " 
pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.711354 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-proxy-ca-bundles\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.711404 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv9tf\" (UniqueName: \"kubernetes.io/projected/0d17836b-1dda-4b03-8417-7025a21b7f0f-kube-api-access-rv9tf\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.711447 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d17836b-1dda-4b03-8417-7025a21b7f0f-serving-cert\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.711473 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-client-ca\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.813442 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-config\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.814006 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-proxy-ca-bundles\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.814057 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9tf\" (UniqueName: \"kubernetes.io/projected/0d17836b-1dda-4b03-8417-7025a21b7f0f-kube-api-access-rv9tf\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.814102 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d17836b-1dda-4b03-8417-7025a21b7f0f-serving-cert\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.814139 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-client-ca\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 
11:15:54.814839 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-config\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.815681 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-client-ca\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.816054 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-proxy-ca-bundles\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.822048 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d17836b-1dda-4b03-8417-7025a21b7f0f-serving-cert\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.831523 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv9tf\" (UniqueName: \"kubernetes.io/projected/0d17836b-1dda-4b03-8417-7025a21b7f0f-kube-api-access-rv9tf\") pod \"controller-manager-75fbbf96d5-6r6lk\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " 
pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:54 crc kubenswrapper[4699]: I0226 11:15:54.966543 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:15:55 crc kubenswrapper[4699]: E0226 11:15:55.282655 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:15:55 crc kubenswrapper[4699]: E0226 11:15:55.298453 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:15:55 crc kubenswrapper[4699]: E0226 11:15:55.309424 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:15:55 crc kubenswrapper[4699]: I0226 11:15:55.522938 4699 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-13 03:12:07.136687041 +0000 UTC Feb 26 11:15:55 crc kubenswrapper[4699]: I0226 11:15:55.523000 4699 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6951h56m11.613690571s for next certificate rotation Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.056170 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/downloads-7954f5f757-tcnxt" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.056411 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.057181 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.056884 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.057264 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.057905 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.057999 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.156739 4699 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-22qbz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.156813 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" podUID="94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.208690 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.289439 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e999a971-660e-4244-8ff3-5d41795bd7f1-kubelet-dir\") pod \"e999a971-660e-4244-8ff3-5d41795bd7f1\" (UID: \"e999a971-660e-4244-8ff3-5d41795bd7f1\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.289572 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e999a971-660e-4244-8ff3-5d41795bd7f1-kube-api-access\") pod \"e999a971-660e-4244-8ff3-5d41795bd7f1\" (UID: \"e999a971-660e-4244-8ff3-5d41795bd7f1\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.289566 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e999a971-660e-4244-8ff3-5d41795bd7f1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e999a971-660e-4244-8ff3-5d41795bd7f1" (UID: "e999a971-660e-4244-8ff3-5d41795bd7f1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.289982 4699 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e999a971-660e-4244-8ff3-5d41795bd7f1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.296026 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e999a971-660e-4244-8ff3-5d41795bd7f1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e999a971-660e-4244-8ff3-5d41795bd7f1" (UID: "e999a971-660e-4244-8ff3-5d41795bd7f1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.356353 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535074-bjfld" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.366567 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.366637 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e999a971-660e-4244-8ff3-5d41795bd7f1","Type":"ContainerDied","Data":"8647ab112fe5d72e6317d33357b8faf5f04c7e9ece66676a3eb1dd1a578be5e7"} Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.366834 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8647ab112fe5d72e6317d33357b8faf5f04c7e9ece66676a3eb1dd1a578be5e7" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.369915 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535074-bjfld" event={"ID":"30d444da-9127-459c-97c6-cdcff5b20e67","Type":"ContainerDied","Data":"18076c1c5e0cfb7ca48ea66321abbc8359663b222708aa29c8481673d9c4ff5c"} Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.369972 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18076c1c5e0cfb7ca48ea66321abbc8359663b222708aa29c8481673d9c4ff5c" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.370016 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535074-bjfld" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.390691 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq8lx\" (UniqueName: \"kubernetes.io/projected/30d444da-9127-459c-97c6-cdcff5b20e67-kube-api-access-qq8lx\") pod \"30d444da-9127-459c-97c6-cdcff5b20e67\" (UID: \"30d444da-9127-459c-97c6-cdcff5b20e67\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.391127 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e999a971-660e-4244-8ff3-5d41795bd7f1-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.396085 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d444da-9127-459c-97c6-cdcff5b20e67-kube-api-access-qq8lx" (OuterVolumeSpecName: "kube-api-access-qq8lx") pod "30d444da-9127-459c-97c6-cdcff5b20e67" (UID: "30d444da-9127-459c-97c6-cdcff5b20e67"). InnerVolumeSpecName "kube-api-access-qq8lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.492367 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq8lx\" (UniqueName: \"kubernetes.io/projected/30d444da-9127-459c-97c6-cdcff5b20e67-kube-api-access-qq8lx\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.682091 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.746393 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk"] Feb 26 11:15:59 crc kubenswrapper[4699]: W0226 11:15:59.750901 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d17836b_1dda_4b03_8417_7025a21b7f0f.slice/crio-49b49b629531d2d3d10e7e6211aa14269de0c64c913e3b2b4346368fd08d5a2b WatchSource:0}: Error finding container 49b49b629531d2d3d10e7e6211aa14269de0c64c913e3b2b4346368fd08d5a2b: Status 404 returned error can't find the container with id 49b49b629531d2d3d10e7e6211aa14269de0c64c913e3b2b4346368fd08d5a2b Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.796581 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-idp-0-file-data\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.796629 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-login\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.796674 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-dir\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.796701 
4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-policies\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.796732 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-provider-selection\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.796766 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-service-ca\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.796799 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.796827 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-trusted-ca-bundle\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.796961 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-router-certs\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.796993 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-session\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.797029 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-serving-cert\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.797056 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-cliconfig\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 
crc kubenswrapper[4699]: I0226 11:15:59.797093 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-error\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.797180 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-ocp-branding-template\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.797252 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7knj\" (UniqueName: \"kubernetes.io/projected/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-kube-api-access-q7knj\") pod \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\" (UID: \"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466\") " Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.797691 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.797830 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.798182 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.798313 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.799019 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.799042 4699 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.802408 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.802623 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.803004 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.803340 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.803408 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.803431 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-kube-api-access-q7knj" (OuterVolumeSpecName: "kube-api-access-q7knj") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). InnerVolumeSpecName "kube-api-access-q7knj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.803536 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.804018 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.805659 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" (UID: "94f9b9d1-e4e9-4cd5-8606-80f57ee5c466"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900048 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7knj\" (UniqueName: \"kubernetes.io/projected/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-kube-api-access-q7knj\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900098 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900127 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900142 4699 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900155 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900170 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900181 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900193 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900205 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900217 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900229 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:59 crc kubenswrapper[4699]: I0226 11:15:59.900242 4699 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.137364 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535076-rv9x5"] Feb 26 11:16:00 crc kubenswrapper[4699]: E0226 11:16:00.137697 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e999a971-660e-4244-8ff3-5d41795bd7f1" containerName="pruner" Feb 
26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.137713 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e999a971-660e-4244-8ff3-5d41795bd7f1" containerName="pruner" Feb 26 11:16:00 crc kubenswrapper[4699]: E0226 11:16:00.137725 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" containerName="oauth-openshift" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.137734 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" containerName="oauth-openshift" Feb 26 11:16:00 crc kubenswrapper[4699]: E0226 11:16:00.137761 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d444da-9127-459c-97c6-cdcff5b20e67" containerName="oc" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.137771 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d444da-9127-459c-97c6-cdcff5b20e67" containerName="oc" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.137905 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" containerName="oauth-openshift" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.137940 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e999a971-660e-4244-8ff3-5d41795bd7f1" containerName="pruner" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.137950 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d444da-9127-459c-97c6-cdcff5b20e67" containerName="oc" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.138510 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.141299 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.141552 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.142945 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.148903 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535076-rv9x5"] Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.204162 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knl5z\" (UniqueName: \"kubernetes.io/projected/0d9d78c8-4193-47a8-9ed9-208f6dc25831-kube-api-access-knl5z\") pod \"auto-csr-approver-29535076-rv9x5\" (UID: \"0d9d78c8-4193-47a8-9ed9-208f6dc25831\") " pod="openshift-infra/auto-csr-approver-29535076-rv9x5" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.305613 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knl5z\" (UniqueName: \"kubernetes.io/projected/0d9d78c8-4193-47a8-9ed9-208f6dc25831-kube-api-access-knl5z\") pod \"auto-csr-approver-29535076-rv9x5\" (UID: \"0d9d78c8-4193-47a8-9ed9-208f6dc25831\") " pod="openshift-infra/auto-csr-approver-29535076-rv9x5" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.327442 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knl5z\" (UniqueName: \"kubernetes.io/projected/0d9d78c8-4193-47a8-9ed9-208f6dc25831-kube-api-access-knl5z\") pod \"auto-csr-approver-29535076-rv9x5\" (UID: \"0d9d78c8-4193-47a8-9ed9-208f6dc25831\") " 
pod="openshift-infra/auto-csr-approver-29535076-rv9x5" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.378319 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" event={"ID":"0d17836b-1dda-4b03-8417-7025a21b7f0f","Type":"ContainerStarted","Data":"24a7d1423c70d2629cc84a6d309405af35280dc1c8f65cb9cbedd726c5936739"} Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.378375 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" event={"ID":"0d17836b-1dda-4b03-8417-7025a21b7f0f","Type":"ContainerStarted","Data":"49b49b629531d2d3d10e7e6211aa14269de0c64c913e3b2b4346368fd08d5a2b"} Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.379918 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" event={"ID":"94f9b9d1-e4e9-4cd5-8606-80f57ee5c466","Type":"ContainerDied","Data":"9d7ac90385fbaeacd88791e44cd5f3dbc802f7727daac69d69660d2d1079d013"} Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.379962 4699 scope.go:117] "RemoveContainer" containerID="74570cc7e5f47cfb5ae78c7040168924d22c48d5892dd1918f787cd4639c996c" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.380145 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-22qbz" Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.411355 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-22qbz"] Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.416975 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-22qbz"] Feb 26 11:16:00 crc kubenswrapper[4699]: I0226 11:16:00.497522 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" Feb 26 11:16:01 crc kubenswrapper[4699]: I0226 11:16:01.023108 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535076-rv9x5"] Feb 26 11:16:01 crc kubenswrapper[4699]: I0226 11:16:01.386124 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" event={"ID":"0d9d78c8-4193-47a8-9ed9-208f6dc25831","Type":"ContainerStarted","Data":"d9a56a2a86268af382b046874040445b48c2975a953e5d204ab0a77f6c325fdc"} Feb 26 11:16:01 crc kubenswrapper[4699]: I0226 11:16:01.391176 4699 generic.go:334] "Generic (PLEG): container finished" podID="1389c8c4-9546-4193-8067-50db90448d4f" containerID="576debb0d3d58f5281816cda92fedce6f78492ddc1301cf006959585594f82b9" exitCode=0 Feb 26 11:16:01 crc kubenswrapper[4699]: I0226 11:16:01.391245 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhgnz" event={"ID":"1389c8c4-9546-4193-8067-50db90448d4f","Type":"ContainerDied","Data":"576debb0d3d58f5281816cda92fedce6f78492ddc1301cf006959585594f82b9"} Feb 26 11:16:01 crc kubenswrapper[4699]: I0226 11:16:01.395929 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzgjj" event={"ID":"71a83978-4f86-404b-967a-0e7493ff6721","Type":"ContainerStarted","Data":"f41fa5d8badc750f1371bec0896b93547f2bd25c6f1942a17a10cfb9c1edba94"} Feb 26 11:16:01 crc kubenswrapper[4699]: I0226 11:16:01.395970 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:16:01 crc kubenswrapper[4699]: I0226 11:16:01.406928 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:16:01 crc kubenswrapper[4699]: I0226 11:16:01.469932 4699 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" podStartSLOduration=13.469915361 podStartE2EDuration="13.469915361s" podCreationTimestamp="2026-02-26 11:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:16:01.46470423 +0000 UTC m=+307.275530664" watchObservedRunningTime="2026-02-26 11:16:01.469915361 +0000 UTC m=+307.280741795" Feb 26 11:16:02 crc kubenswrapper[4699]: I0226 11:16:02.268646 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f9b9d1-e4e9-4cd5-8606-80f57ee5c466" path="/var/lib/kubelet/pods/94f9b9d1-e4e9-4cd5-8606-80f57ee5c466/volumes" Feb 26 11:16:02 crc kubenswrapper[4699]: I0226 11:16:02.402225 4699 generic.go:334] "Generic (PLEG): container finished" podID="71a83978-4f86-404b-967a-0e7493ff6721" containerID="f41fa5d8badc750f1371bec0896b93547f2bd25c6f1942a17a10cfb9c1edba94" exitCode=0 Feb 26 11:16:02 crc kubenswrapper[4699]: I0226 11:16:02.402333 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzgjj" event={"ID":"71a83978-4f86-404b-967a-0e7493ff6721","Type":"ContainerDied","Data":"f41fa5d8badc750f1371bec0896b93547f2bd25c6f1942a17a10cfb9c1edba94"} Feb 26 11:16:06 crc kubenswrapper[4699]: I0226 11:16:06.260694 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:16:06 crc kubenswrapper[4699]: I0226 11:16:06.437525 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhgnz" event={"ID":"1389c8c4-9546-4193-8067-50db90448d4f","Type":"ContainerStarted","Data":"b8eedef066fa8aaa6df130360e7ae91b6c35c65386cf0e1eb331ae24b87e6305"} Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.464993 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fhgnz" podStartSLOduration=6.040778383 podStartE2EDuration="2m0.464974119s" podCreationTimestamp="2026-02-26 11:14:07 +0000 UTC" firstStartedPulling="2026-02-26 11:14:11.064554627 +0000 UTC m=+196.875381061" lastFinishedPulling="2026-02-26 11:16:05.488750363 +0000 UTC m=+311.299576797" observedRunningTime="2026-02-26 11:16:07.462864418 +0000 UTC m=+313.273690872" watchObservedRunningTime="2026-02-26 11:16:07.464974119 +0000 UTC m=+313.275800573" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.618452 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-f54c45747-bbg8s"] Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.619498 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.625432 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.625596 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.625674 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.625812 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.627192 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.627474 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.627522 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.627572 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.627614 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.627856 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 
11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.628103 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.628226 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.634345 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.639178 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.641936 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f54c45747-bbg8s"] Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.692933 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796134 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqjbv\" (UniqueName: \"kubernetes.io/projected/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-kube-api-access-zqjbv\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796267 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " 
pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796362 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-session\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796393 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796418 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796508 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796618 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-router-certs\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796683 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796721 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796774 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-template-error\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796803 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-template-login\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796844 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-audit-policies\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796875 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-service-ca\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.796930 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-audit-dir\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.897980 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " 
pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.898046 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-session\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.898067 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.898085 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.898135 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.898162 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-router-certs\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.898846 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.899454 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.899592 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.899680 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-template-error\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: 
\"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.899688 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.899732 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-template-login\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.899763 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-audit-policies\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.899902 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-service-ca\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.899986 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-audit-dir\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.900062 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqjbv\" (UniqueName: \"kubernetes.io/projected/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-kube-api-access-zqjbv\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.900508 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-audit-policies\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.900570 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-audit-dir\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.900976 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-service-ca\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.904904 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-router-certs\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.905495 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.905549 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-session\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.906334 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-template-error\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.906541 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.908562 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-user-template-login\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.919672 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.919913 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.921266 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqjbv\" (UniqueName: \"kubernetes.io/projected/c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a-kube-api-access-zqjbv\") pod \"oauth-openshift-f54c45747-bbg8s\" (UID: \"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a\") " pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:07 crc kubenswrapper[4699]: I0226 11:16:07.938219 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:08 crc kubenswrapper[4699]: I0226 11:16:08.117492 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:16:08 crc kubenswrapper[4699]: I0226 11:16:08.117555 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:16:08 crc kubenswrapper[4699]: I0226 11:16:08.798394 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk"] Feb 26 11:16:08 crc kubenswrapper[4699]: I0226 11:16:08.798955 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" podUID="0d17836b-1dda-4b03-8417-7025a21b7f0f" containerName="controller-manager" containerID="cri-o://24a7d1423c70d2629cc84a6d309405af35280dc1c8f65cb9cbedd726c5936739" gracePeriod=30 Feb 26 11:16:08 crc kubenswrapper[4699]: I0226 11:16:08.829176 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt"] Feb 26 11:16:08 crc kubenswrapper[4699]: I0226 11:16:08.829765 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" podUID="bcf1b992-7a07-4490-bf91-a0e2f802d6aa" containerName="route-controller-manager" containerID="cri-o://3bdc2b92ccb0248bc419987c54d581fc1cccfbe419459f4758200553b00ea095" gracePeriod=30 Feb 26 11:16:09 crc kubenswrapper[4699]: I0226 11:16:09.056180 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 
11:16:09 crc kubenswrapper[4699]: I0226 11:16:09.056245 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:16:09 crc kubenswrapper[4699]: I0226 11:16:09.056183 4699 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcnxt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 26 11:16:09 crc kubenswrapper[4699]: I0226 11:16:09.056527 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tcnxt" podUID="72b1bc55-f48b-4d90-ab02-3a80438096b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 26 11:16:09 crc kubenswrapper[4699]: I0226 11:16:09.260554 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:16:10 crc kubenswrapper[4699]: I0226 11:16:10.260347 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:16:10 crc kubenswrapper[4699]: I0226 11:16:10.469664 4699 generic.go:334] "Generic (PLEG): container finished" podID="bcf1b992-7a07-4490-bf91-a0e2f802d6aa" containerID="3bdc2b92ccb0248bc419987c54d581fc1cccfbe419459f4758200553b00ea095" exitCode=0 Feb 26 11:16:10 crc kubenswrapper[4699]: I0226 11:16:10.469748 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" event={"ID":"bcf1b992-7a07-4490-bf91-a0e2f802d6aa","Type":"ContainerDied","Data":"3bdc2b92ccb0248bc419987c54d581fc1cccfbe419459f4758200553b00ea095"} Feb 26 11:16:10 crc kubenswrapper[4699]: I0226 11:16:10.470803 4699 generic.go:334] "Generic (PLEG): container finished" podID="0d17836b-1dda-4b03-8417-7025a21b7f0f" containerID="24a7d1423c70d2629cc84a6d309405af35280dc1c8f65cb9cbedd726c5936739" exitCode=0 Feb 26 11:16:10 crc kubenswrapper[4699]: I0226 11:16:10.470829 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" event={"ID":"0d17836b-1dda-4b03-8417-7025a21b7f0f","Type":"ContainerDied","Data":"24a7d1423c70d2629cc84a6d309405af35280dc1c8f65cb9cbedd726c5936739"} Feb 26 11:16:11 crc kubenswrapper[4699]: I0226 11:16:11.133197 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fhgnz" podUID="1389c8c4-9546-4193-8067-50db90448d4f" containerName="registry-server" probeResult="failure" output=< Feb 26 11:16:11 crc kubenswrapper[4699]: timeout: failed to connect service ":50051" within 1s Feb 26 11:16:11 crc kubenswrapper[4699]: > Feb 26 11:16:12 crc kubenswrapper[4699]: I0226 11:16:12.520661 4699 patch_prober.go:28] interesting pod/route-controller-manager-5dd48cdbf5-ckczt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 26 11:16:12 crc kubenswrapper[4699]: I0226 11:16:12.520751 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" podUID="bcf1b992-7a07-4490-bf91-a0e2f802d6aa" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 26 11:16:14 crc kubenswrapper[4699]: I0226 11:16:14.968046 4699 patch_prober.go:28] interesting pod/controller-manager-75fbbf96d5-6r6lk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Feb 26 11:16:14 crc kubenswrapper[4699]: I0226 11:16:14.968489 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" podUID="0d17836b-1dda-4b03-8417-7025a21b7f0f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.855522 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.864836 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.888564 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz"] Feb 26 11:16:15 crc kubenswrapper[4699]: E0226 11:16:15.888880 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d17836b-1dda-4b03-8417-7025a21b7f0f" containerName="controller-manager" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.888904 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d17836b-1dda-4b03-8417-7025a21b7f0f" containerName="controller-manager" Feb 26 11:16:15 crc kubenswrapper[4699]: E0226 11:16:15.888922 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf1b992-7a07-4490-bf91-a0e2f802d6aa" containerName="route-controller-manager" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.888933 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf1b992-7a07-4490-bf91-a0e2f802d6aa" containerName="route-controller-manager" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.889072 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf1b992-7a07-4490-bf91-a0e2f802d6aa" containerName="route-controller-manager" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.889093 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d17836b-1dda-4b03-8417-7025a21b7f0f" containerName="controller-manager" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.889674 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.909401 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz"] Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.945587 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54vbq\" (UniqueName: \"kubernetes.io/projected/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-kube-api-access-54vbq\") pod \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.945676 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-proxy-ca-bundles\") pod \"0d17836b-1dda-4b03-8417-7025a21b7f0f\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.945914 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/869c67c3-005d-47f5-9dc7-9f253c523541-serving-cert\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.946085 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869c67c3-005d-47f5-9dc7-9f253c523541-config\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.946274 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/869c67c3-005d-47f5-9dc7-9f253c523541-client-ca\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.946388 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s28g8\" (UniqueName: \"kubernetes.io/projected/869c67c3-005d-47f5-9dc7-9f253c523541-kube-api-access-s28g8\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.947495 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0d17836b-1dda-4b03-8417-7025a21b7f0f" (UID: "0d17836b-1dda-4b03-8417-7025a21b7f0f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:16:15 crc kubenswrapper[4699]: I0226 11:16:15.951619 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-kube-api-access-54vbq" (OuterVolumeSpecName: "kube-api-access-54vbq") pod "bcf1b992-7a07-4490-bf91-a0e2f802d6aa" (UID: "bcf1b992-7a07-4490-bf91-a0e2f802d6aa"). InnerVolumeSpecName "kube-api-access-54vbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.006499 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" event={"ID":"bcf1b992-7a07-4490-bf91-a0e2f802d6aa","Type":"ContainerDied","Data":"fc9c825381387b5b1c159e4f7a59a6327fb0accf80fe9e5362f7a82129337743"} Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.006558 4699 scope.go:117] "RemoveContainer" containerID="3bdc2b92ccb0248bc419987c54d581fc1cccfbe419459f4758200553b00ea095" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.006520 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.008255 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" event={"ID":"0d17836b-1dda-4b03-8417-7025a21b7f0f","Type":"ContainerDied","Data":"49b49b629531d2d3d10e7e6211aa14269de0c64c913e3b2b4346368fd08d5a2b"} Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.008344 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047156 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-config\") pod \"0d17836b-1dda-4b03-8417-7025a21b7f0f\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047248 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-client-ca\") pod \"0d17836b-1dda-4b03-8417-7025a21b7f0f\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047276 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-serving-cert\") pod \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047314 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d17836b-1dda-4b03-8417-7025a21b7f0f-serving-cert\") pod \"0d17836b-1dda-4b03-8417-7025a21b7f0f\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047348 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv9tf\" (UniqueName: \"kubernetes.io/projected/0d17836b-1dda-4b03-8417-7025a21b7f0f-kube-api-access-rv9tf\") pod \"0d17836b-1dda-4b03-8417-7025a21b7f0f\" (UID: \"0d17836b-1dda-4b03-8417-7025a21b7f0f\") " Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047384 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-config\") pod \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047421 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-client-ca\") pod \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\" (UID: \"bcf1b992-7a07-4490-bf91-a0e2f802d6aa\") " Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047699 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/869c67c3-005d-47f5-9dc7-9f253c523541-client-ca\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047790 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s28g8\" (UniqueName: \"kubernetes.io/projected/869c67c3-005d-47f5-9dc7-9f253c523541-kube-api-access-s28g8\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047842 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/869c67c3-005d-47f5-9dc7-9f253c523541-serving-cert\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047880 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d17836b-1dda-4b03-8417-7025a21b7f0f" (UID: "0d17836b-1dda-4b03-8417-7025a21b7f0f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.047951 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869c67c3-005d-47f5-9dc7-9f253c523541-config\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.048018 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54vbq\" (UniqueName: \"kubernetes.io/projected/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-kube-api-access-54vbq\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.048036 4699 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.048051 4699 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.048087 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-config" (OuterVolumeSpecName: "config") pod "0d17836b-1dda-4b03-8417-7025a21b7f0f" (UID: "0d17836b-1dda-4b03-8417-7025a21b7f0f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.049372 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-config" (OuterVolumeSpecName: "config") pod "bcf1b992-7a07-4490-bf91-a0e2f802d6aa" (UID: "bcf1b992-7a07-4490-bf91-a0e2f802d6aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.049422 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-client-ca" (OuterVolumeSpecName: "client-ca") pod "bcf1b992-7a07-4490-bf91-a0e2f802d6aa" (UID: "bcf1b992-7a07-4490-bf91-a0e2f802d6aa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.049798 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/869c67c3-005d-47f5-9dc7-9f253c523541-client-ca\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.050157 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869c67c3-005d-47f5-9dc7-9f253c523541-config\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.052143 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod 
"bcf1b992-7a07-4490-bf91-a0e2f802d6aa" (UID: "bcf1b992-7a07-4490-bf91-a0e2f802d6aa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.052325 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d17836b-1dda-4b03-8417-7025a21b7f0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d17836b-1dda-4b03-8417-7025a21b7f0f" (UID: "0d17836b-1dda-4b03-8417-7025a21b7f0f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.052926 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d17836b-1dda-4b03-8417-7025a21b7f0f-kube-api-access-rv9tf" (OuterVolumeSpecName: "kube-api-access-rv9tf") pod "0d17836b-1dda-4b03-8417-7025a21b7f0f" (UID: "0d17836b-1dda-4b03-8417-7025a21b7f0f"). InnerVolumeSpecName "kube-api-access-rv9tf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.053230 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/869c67c3-005d-47f5-9dc7-9f253c523541-serving-cert\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.068309 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s28g8\" (UniqueName: \"kubernetes.io/projected/869c67c3-005d-47f5-9dc7-9f253c523541-kube-api-access-s28g8\") pod \"route-controller-manager-66fb947f99-29sbz\" (UID: \"869c67c3-005d-47f5-9dc7-9f253c523541\") " pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.149833 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d17836b-1dda-4b03-8417-7025a21b7f0f-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.149882 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.149893 4699 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d17836b-1dda-4b03-8417-7025a21b7f0f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.149902 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv9tf\" (UniqueName: \"kubernetes.io/projected/0d17836b-1dda-4b03-8417-7025a21b7f0f-kube-api-access-rv9tf\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:16 
crc kubenswrapper[4699]: I0226 11:16:16.149912 4699 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.149920 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf1b992-7a07-4490-bf91-a0e2f802d6aa-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.214548 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.668134 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk"] Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.677599 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75fbbf96d5-6r6lk"] Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.682518 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt"] Feb 26 11:16:16 crc kubenswrapper[4699]: I0226 11:16:16.687610 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dd48cdbf5-ckczt"] Feb 26 11:16:17 crc kubenswrapper[4699]: I0226 11:16:17.741906 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f54c45747-bbg8s"] Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.109215 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx"] Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.110035 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.115771 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.117361 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.118621 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.120495 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.124412 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx"] Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.124641 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.125677 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.129006 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.173009 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xm8v\" (UniqueName: \"kubernetes.io/projected/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-kube-api-access-8xm8v\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " 
pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.173224 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-proxy-ca-bundles\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.173406 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-client-ca\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.173488 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-serving-cert\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.173539 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-config\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.229430 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fhgnz" 
Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.271098 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d17836b-1dda-4b03-8417-7025a21b7f0f" path="/var/lib/kubelet/pods/0d17836b-1dda-4b03-8417-7025a21b7f0f/volumes" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.271965 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcf1b992-7a07-4490-bf91-a0e2f802d6aa" path="/var/lib/kubelet/pods/bcf1b992-7a07-4490-bf91-a0e2f802d6aa/volumes" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.274775 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-client-ca\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.274833 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-serving-cert\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.274879 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-config\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.275047 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xm8v\" (UniqueName: 
\"kubernetes.io/projected/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-kube-api-access-8xm8v\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.275601 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-proxy-ca-bundles\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.276611 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-client-ca\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.319415 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-serving-cert\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.319837 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-proxy-ca-bundles\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.319606 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-config\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.322640 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xm8v\" (UniqueName: \"kubernetes.io/projected/92e41a97-a913-4bed-87e8-1d3f55e0aa1a-kube-api-access-8xm8v\") pod \"controller-manager-6cbf55bfdf-xlnsx\" (UID: \"92e41a97-a913-4bed-87e8-1d3f55e0aa1a\") " pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.338950 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.459603 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:18 crc kubenswrapper[4699]: I0226 11:16:18.471399 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fhgnz"] Feb 26 11:16:19 crc kubenswrapper[4699]: I0226 11:16:19.288946 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tcnxt" Feb 26 11:16:19 crc kubenswrapper[4699]: I0226 11:16:19.995103 4699 scope.go:117] "RemoveContainer" containerID="24a7d1423c70d2629cc84a6d309405af35280dc1c8f65cb9cbedd726c5936739" Feb 26 11:16:20 crc kubenswrapper[4699]: I0226 11:16:20.036165 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" event={"ID":"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a","Type":"ContainerStarted","Data":"3d6fd9756c134f401a126d34c967c09686961803acbfb7150e119a16e1b25167"} Feb 26 11:16:20 crc kubenswrapper[4699]: I0226 11:16:20.036430 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fhgnz" podUID="1389c8c4-9546-4193-8067-50db90448d4f" containerName="registry-server" containerID="cri-o://b8eedef066fa8aaa6df130360e7ae91b6c35c65386cf0e1eb331ae24b87e6305" gracePeriod=2 Feb 26 11:16:20 crc kubenswrapper[4699]: I0226 11:16:20.605464 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz"] Feb 26 11:16:20 crc kubenswrapper[4699]: I0226 11:16:20.711452 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx"] Feb 26 11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.064703 4699 generic.go:334] "Generic (PLEG): container finished" podID="1389c8c4-9546-4193-8067-50db90448d4f" containerID="b8eedef066fa8aaa6df130360e7ae91b6c35c65386cf0e1eb331ae24b87e6305" exitCode=0 Feb 26 
11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.065025 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhgnz" event={"ID":"1389c8c4-9546-4193-8067-50db90448d4f","Type":"ContainerDied","Data":"b8eedef066fa8aaa6df130360e7ae91b6c35c65386cf0e1eb331ae24b87e6305"} Feb 26 11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.065059 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhgnz" event={"ID":"1389c8c4-9546-4193-8067-50db90448d4f","Type":"ContainerDied","Data":"f819282439bed9f63847862724adeef5b7b347bc240dc1336ee32c15da7bf7cc"} Feb 26 11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.065075 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f819282439bed9f63847862724adeef5b7b347bc240dc1336ee32c15da7bf7cc" Feb 26 11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.066619 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" event={"ID":"92e41a97-a913-4bed-87e8-1d3f55e0aa1a","Type":"ContainerStarted","Data":"82ebdc1d3bcdfe014c120b07518e64401ba8d256cb93f01f647bf2ee46fa985c"} Feb 26 11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.068779 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" event={"ID":"869c67c3-005d-47f5-9dc7-9f253c523541","Type":"ContainerStarted","Data":"5b93c6ed5a55a83728c711bd95b5222e62a3d5d60fd870e0d69e6c3896a973c9"} Feb 26 11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.071157 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc9c6" event={"ID":"44d171ad-7d92-4c70-a686-65f60ded8a03","Type":"ContainerStarted","Data":"d27dda8ede66374aa47b77a60b930fa0b6c4e065e9c9b269dc3e8dd85fa02ece"} Feb 26 11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.075515 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-s8kpz" event={"ID":"8c96a703-e568-4916-8035-a951ae91dc2b","Type":"ContainerStarted","Data":"7480103b052e67e1c14af93c5ed9ab5b5c3150d0a1dbb5d35641a39bc2cc9515"} Feb 26 11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.078199 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzgjj" event={"ID":"71a83978-4f86-404b-967a-0e7493ff6721","Type":"ContainerStarted","Data":"c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a"} Feb 26 11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.080332 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czwkc" event={"ID":"ac0026c3-1fad-4b34-9c42-389971f0c773","Type":"ContainerStarted","Data":"919888fa21cfe39704e1b0c864c73cd7cdeeac94e5ee1bb4c79246202be61323"} Feb 26 11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.082390 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhbz" event={"ID":"9ea10063-7888-400e-af1c-216cbde5a13e","Type":"ContainerStarted","Data":"c429ee05cb01901447a5e3bded424d4a0427e987ffd209a1f29754bcb9be9b4d"} Feb 26 11:16:21 crc kubenswrapper[4699]: I0226 11:16:21.313883 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mzgjj" podStartSLOduration=5.321535885 podStartE2EDuration="2m14.313850579s" podCreationTimestamp="2026-02-26 11:14:07 +0000 UTC" firstStartedPulling="2026-02-26 11:14:11.064536677 +0000 UTC m=+196.875363111" lastFinishedPulling="2026-02-26 11:16:20.056851371 +0000 UTC m=+325.867677805" observedRunningTime="2026-02-26 11:16:21.259452287 +0000 UTC m=+327.070278721" watchObservedRunningTime="2026-02-26 11:16:21.313850579 +0000 UTC m=+327.124677023" Feb 26 11:16:22 crc kubenswrapper[4699]: I0226 11:16:22.258891 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrk4n" 
event={"ID":"6e7ddf51-5522-4085-8567-76c9a254ed15","Type":"ContainerStarted","Data":"e63934f65b729d4f1b8b668dbe9b4795f057f647c6b7a160c5e82634ad1de5fd"} Feb 26 11:16:22 crc kubenswrapper[4699]: I0226 11:16:22.374630 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:16:22 crc kubenswrapper[4699]: I0226 11:16:22.711894 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-utilities\") pod \"1389c8c4-9546-4193-8067-50db90448d4f\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " Feb 26 11:16:22 crc kubenswrapper[4699]: I0226 11:16:22.711986 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hnhh\" (UniqueName: \"kubernetes.io/projected/1389c8c4-9546-4193-8067-50db90448d4f-kube-api-access-8hnhh\") pod \"1389c8c4-9546-4193-8067-50db90448d4f\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " Feb 26 11:16:22 crc kubenswrapper[4699]: I0226 11:16:22.712035 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-catalog-content\") pod \"1389c8c4-9546-4193-8067-50db90448d4f\" (UID: \"1389c8c4-9546-4193-8067-50db90448d4f\") " Feb 26 11:16:22 crc kubenswrapper[4699]: I0226 11:16:22.712651 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-utilities" (OuterVolumeSpecName: "utilities") pod "1389c8c4-9546-4193-8067-50db90448d4f" (UID: "1389c8c4-9546-4193-8067-50db90448d4f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:16:22 crc kubenswrapper[4699]: I0226 11:16:22.728296 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1389c8c4-9546-4193-8067-50db90448d4f-kube-api-access-8hnhh" (OuterVolumeSpecName: "kube-api-access-8hnhh") pod "1389c8c4-9546-4193-8067-50db90448d4f" (UID: "1389c8c4-9546-4193-8067-50db90448d4f"). InnerVolumeSpecName "kube-api-access-8hnhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:16:22 crc kubenswrapper[4699]: I0226 11:16:22.791107 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1389c8c4-9546-4193-8067-50db90448d4f" (UID: "1389c8c4-9546-4193-8067-50db90448d4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:16:22 crc kubenswrapper[4699]: I0226 11:16:22.813889 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:22 crc kubenswrapper[4699]: I0226 11:16:22.813938 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hnhh\" (UniqueName: \"kubernetes.io/projected/1389c8c4-9546-4193-8067-50db90448d4f-kube-api-access-8hnhh\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:22 crc kubenswrapper[4699]: I0226 11:16:22.813950 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1389c8c4-9546-4193-8067-50db90448d4f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:23 crc kubenswrapper[4699]: I0226 11:16:23.477512 4699 generic.go:334] "Generic (PLEG): container finished" podID="8c96a703-e568-4916-8035-a951ae91dc2b" 
containerID="7480103b052e67e1c14af93c5ed9ab5b5c3150d0a1dbb5d35641a39bc2cc9515" exitCode=0 Feb 26 11:16:23 crc kubenswrapper[4699]: I0226 11:16:23.477579 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8kpz" event={"ID":"8c96a703-e568-4916-8035-a951ae91dc2b","Type":"ContainerDied","Data":"7480103b052e67e1c14af93c5ed9ab5b5c3150d0a1dbb5d35641a39bc2cc9515"} Feb 26 11:16:23 crc kubenswrapper[4699]: I0226 11:16:23.482142 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhgks" event={"ID":"6b9da973-6b5f-4485-adca-8792b0a3d256","Type":"ContainerStarted","Data":"7d653e44fd8d815b615ce9635176302fd8a0ad6d3f93420c0c7d85da3992bebc"} Feb 26 11:16:23 crc kubenswrapper[4699]: I0226 11:16:23.487793 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhgnz" Feb 26 11:16:23 crc kubenswrapper[4699]: I0226 11:16:23.487980 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" event={"ID":"92e41a97-a913-4bed-87e8-1d3f55e0aa1a","Type":"ContainerStarted","Data":"6acfdb78f4a78fdad4245053d38c321d68fa1332b885aac9d01d5287f64dbb26"} Feb 26 11:16:23 crc kubenswrapper[4699]: I0226 11:16:23.489176 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:23 crc kubenswrapper[4699]: I0226 11:16:23.490842 4699 patch_prober.go:28] interesting pod/controller-manager-6cbf55bfdf-xlnsx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Feb 26 11:16:23 crc kubenswrapper[4699]: I0226 11:16:23.490890 4699 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" podUID="92e41a97-a913-4bed-87e8-1d3f55e0aa1a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Feb 26 11:16:23 crc kubenswrapper[4699]: I0226 11:16:23.605053 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" podStartSLOduration=15.605026387 podStartE2EDuration="15.605026387s" podCreationTimestamp="2026-02-26 11:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:16:23.600826728 +0000 UTC m=+329.411653182" watchObservedRunningTime="2026-02-26 11:16:23.605026387 +0000 UTC m=+329.415852831" Feb 26 11:16:25 crc kubenswrapper[4699]: E0226 11:16:25.514583 4699 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.254s" Feb 26 11:16:25 crc kubenswrapper[4699]: I0226 11:16:25.531406 4699 generic.go:334] "Generic (PLEG): container finished" podID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerID="919888fa21cfe39704e1b0c864c73cd7cdeeac94e5ee1bb4c79246202be61323" exitCode=0 Feb 26 11:16:25 crc kubenswrapper[4699]: I0226 11:16:25.531528 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czwkc" event={"ID":"ac0026c3-1fad-4b34-9c42-389971f0c773","Type":"ContainerDied","Data":"919888fa21cfe39704e1b0c864c73cd7cdeeac94e5ee1bb4c79246202be61323"} Feb 26 11:16:25 crc kubenswrapper[4699]: I0226 11:16:25.536753 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" event={"ID":"869c67c3-005d-47f5-9dc7-9f253c523541","Type":"ContainerStarted","Data":"07b45cc6373fa8988168f1cc8f5d8abc09ef0f4b1efdd7c69a6a543665bba754"} Feb 
26 11:16:25 crc kubenswrapper[4699]: I0226 11:16:25.541334 4699 generic.go:334] "Generic (PLEG): container finished" podID="6e7ddf51-5522-4085-8567-76c9a254ed15" containerID="e63934f65b729d4f1b8b668dbe9b4795f057f647c6b7a160c5e82634ad1de5fd" exitCode=0 Feb 26 11:16:25 crc kubenswrapper[4699]: I0226 11:16:25.541412 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrk4n" event={"ID":"6e7ddf51-5522-4085-8567-76c9a254ed15","Type":"ContainerDied","Data":"e63934f65b729d4f1b8b668dbe9b4795f057f647c6b7a160c5e82634ad1de5fd"} Feb 26 11:16:25 crc kubenswrapper[4699]: I0226 11:16:25.547425 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" event={"ID":"c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a","Type":"ContainerStarted","Data":"2ce288c961079d54b5964b5eb4891bde97dbf19c66bf3a9774202810b5d5b79a"} Feb 26 11:16:25 crc kubenswrapper[4699]: I0226 11:16:25.547461 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:25 crc kubenswrapper[4699]: I0226 11:16:25.562289 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cbf55bfdf-xlnsx" Feb 26 11:16:25 crc kubenswrapper[4699]: I0226 11:16:25.810909 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" podStartSLOduration=57.810888891 podStartE2EDuration="57.810888891s" podCreationTimestamp="2026-02-26 11:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:16:25.808959986 +0000 UTC m=+331.619786430" watchObservedRunningTime="2026-02-26 11:16:25.810888891 +0000 UTC m=+331.621715335" Feb 26 11:16:25 crc kubenswrapper[4699]: I0226 11:16:25.888706 4699 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-fhgnz"] Feb 26 11:16:25 crc kubenswrapper[4699]: I0226 11:16:25.896556 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fhgnz"] Feb 26 11:16:26 crc kubenswrapper[4699]: I0226 11:16:26.440093 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1389c8c4-9546-4193-8067-50db90448d4f" path="/var/lib/kubelet/pods/1389c8c4-9546-4193-8067-50db90448d4f/volumes" Feb 26 11:16:26 crc kubenswrapper[4699]: I0226 11:16:26.477015 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" Feb 26 11:16:26 crc kubenswrapper[4699]: I0226 11:16:26.616569 4699 generic.go:334] "Generic (PLEG): container finished" podID="9ea10063-7888-400e-af1c-216cbde5a13e" containerID="c429ee05cb01901447a5e3bded424d4a0427e987ffd209a1f29754bcb9be9b4d" exitCode=0 Feb 26 11:16:26 crc kubenswrapper[4699]: I0226 11:16:26.618315 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhbz" event={"ID":"9ea10063-7888-400e-af1c-216cbde5a13e","Type":"ContainerDied","Data":"c429ee05cb01901447a5e3bded424d4a0427e987ffd209a1f29754bcb9be9b4d"} Feb 26 11:16:26 crc kubenswrapper[4699]: I0226 11:16:26.621417 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:26 crc kubenswrapper[4699]: I0226 11:16:26.948598 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" Feb 26 11:16:27 crc kubenswrapper[4699]: I0226 11:16:27.219219 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66fb947f99-29sbz" podStartSLOduration=19.219191128 podStartE2EDuration="19.219191128s" 
podCreationTimestamp="2026-02-26 11:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:16:26.849234671 +0000 UTC m=+332.660061115" watchObservedRunningTime="2026-02-26 11:16:27.219191128 +0000 UTC m=+333.030017562" Feb 26 11:16:27 crc kubenswrapper[4699]: I0226 11:16:27.609020 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:16:27 crc kubenswrapper[4699]: I0226 11:16:27.609622 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:16:27 crc kubenswrapper[4699]: I0226 11:16:27.968379 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" event={"ID":"0d9d78c8-4193-47a8-9ed9-208f6dc25831","Type":"ContainerStarted","Data":"000757444f955626a5cade194e8afdfce85b9f484def8b4bc1703641245c47c3"} Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:27.995528 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" podStartSLOduration=13.250882132 podStartE2EDuration="27.995505611s" podCreationTimestamp="2026-02-26 11:16:00 +0000 UTC" firstStartedPulling="2026-02-26 11:16:01.047136633 +0000 UTC m=+306.857963067" lastFinishedPulling="2026-02-26 11:16:15.791760112 +0000 UTC m=+321.602586546" observedRunningTime="2026-02-26 11:16:27.992576757 +0000 UTC m=+333.803403211" watchObservedRunningTime="2026-02-26 11:16:27.995505611 +0000 UTC m=+333.806332045" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.181802 4699 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.182824 4699 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1389c8c4-9546-4193-8067-50db90448d4f" containerName="extract-utilities" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.182859 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="1389c8c4-9546-4193-8067-50db90448d4f" containerName="extract-utilities" Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.182897 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1389c8c4-9546-4193-8067-50db90448d4f" containerName="extract-content" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.182911 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="1389c8c4-9546-4193-8067-50db90448d4f" containerName="extract-content" Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.182932 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1389c8c4-9546-4193-8067-50db90448d4f" containerName="registry-server" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.182954 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="1389c8c4-9546-4193-8067-50db90448d4f" containerName="registry-server" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.183471 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="1389c8c4-9546-4193-8067-50db90448d4f" containerName="registry-server" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.186592 4699 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.191340 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886" gracePeriod=15 Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.192378 4699 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec" gracePeriod=15 Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.192779 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6" gracePeriod=15 Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.193727 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356" gracePeriod=15 Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.204095 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e" gracePeriod=15 Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.212476 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.213856 4699 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.216315 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216349 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.216364 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216374 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.216385 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216393 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.216402 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216409 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.216424 4699 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216431 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.216440 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216467 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.216486 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216494 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.216512 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216521 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216719 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216739 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:16:28 crc 
kubenswrapper[4699]: I0226 11:16:28.216754 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216770 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216779 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216787 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216799 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.216807 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.218967 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.219016 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.219046 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.219062 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.220174 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.242563 4699 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Feb 26 11:16:28 crc kubenswrapper[4699]: E0226 11:16:28.274082 4699 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.382684 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.383335 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.383395 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.383447 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.383476 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.383548 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.383613 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.383763 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.485593 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.485837 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.485881 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.485907 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.485935 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.485984 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.486033 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.486083 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.486250 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.486339 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.486387 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.486422 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.486445 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.486482 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.486514 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.486550 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.574946 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.982044 4699 generic.go:334] "Generic (PLEG): container finished" podID="a904aa73-23d7-4994-882a-4afafe02fb82" containerID="3fd7c462ca5ff5bad3835e30c617b483507570293909f8328947b3fbdead2389" exitCode=0
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.982200 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a904aa73-23d7-4994-882a-4afafe02fb82","Type":"ContainerDied","Data":"3fd7c462ca5ff5bad3835e30c617b483507570293909f8328947b3fbdead2389"}
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.983403 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.987387 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.991156 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.992531 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886" exitCode=0
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.992567 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec" exitCode=0
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.992578 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6" exitCode=0
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.992592 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356" exitCode=2
Feb 26 11:16:28 crc kubenswrapper[4699]: I0226 11:16:28.994483 4699 scope.go:117] "RemoveContainer" containerID="eb572790b6587d3d0bd02995a8bc9efc2f98a626a79ac288ee6925e50809fcf0"
Feb 26 11:16:29 crc kubenswrapper[4699]: I0226 11:16:29.013461 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mzgjj" podUID="71a83978-4f86-404b-967a-0e7493ff6721" containerName="registry-server" probeResult="failure" output=<
Feb 26 11:16:29 crc kubenswrapper[4699]: timeout: failed to connect service ":50051" within 1s
Feb 26 11:16:29 crc kubenswrapper[4699]: >
Feb 26 11:16:29 crc kubenswrapper[4699]: E0226 11:16:29.014198 4699 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.213:6443: connect: connection refused" event=<
Feb 26 11:16:29 crc kubenswrapper[4699]: &Event{ObjectMeta:{certified-operators-mzgjj.1897c7bc14fd9270 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-mzgjj,UID:71a83978-4f86-404b-967a-0e7493ff6721,APIVersion:v1,ResourceVersion:28011,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Startup probe failed: timeout: failed to connect service ":50051" within 1s
Feb 26 11:16:29 crc kubenswrapper[4699]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:16:29.013521008 +0000 UTC m=+334.824347442,LastTimestamp:2026-02-26 11:16:29.013521008 +0000 UTC m=+334.824347442,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 26 11:16:29 crc kubenswrapper[4699]: >
Feb 26 11:16:30 crc kubenswrapper[4699]: I0226 11:16:30.001671 4699 generic.go:334] "Generic (PLEG): container finished" podID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" containerID="000757444f955626a5cade194e8afdfce85b9f484def8b4bc1703641245c47c3" exitCode=0
Feb 26 11:16:30 crc kubenswrapper[4699]: I0226 11:16:30.001853 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" event={"ID":"0d9d78c8-4193-47a8-9ed9-208f6dc25831","Type":"ContainerDied","Data":"000757444f955626a5cade194e8afdfce85b9f484def8b4bc1703641245c47c3"}
Feb 26 11:16:30 crc kubenswrapper[4699]: I0226 11:16:30.005199 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:30 crc kubenswrapper[4699]: I0226 11:16:30.005803 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:30 crc kubenswrapper[4699]: E0226 11:16:30.953714 4699 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b9da973_6b5f_4485_adca_8792b0a3d256.slice/crio-conmon-7d653e44fd8d815b615ce9635176302fd8a0ad6d3f93420c0c7d85da3992bebc.scope\": RecentStats: unable to find data in memory cache]"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.015655 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.018202 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e" exitCode=0
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.022226 4699 generic.go:334] "Generic (PLEG): container finished" podID="6b9da973-6b5f-4485-adca-8792b0a3d256" containerID="7d653e44fd8d815b615ce9635176302fd8a0ad6d3f93420c0c7d85da3992bebc" exitCode=0
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.022345 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhgks" event={"ID":"6b9da973-6b5f-4485-adca-8792b0a3d256","Type":"ContainerDied","Data":"7d653e44fd8d815b615ce9635176302fd8a0ad6d3f93420c0c7d85da3992bebc"}
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.023313 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.023676 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.023669 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fed17fa17d1e38cfae5f233da9ff311539827646f8cade76c9ff17fc397c01f8"}
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.024315 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.025340 4699 generic.go:334] "Generic (PLEG): container finished" podID="44d171ad-7d92-4c70-a686-65f60ded8a03" containerID="d27dda8ede66374aa47b77a60b930fa0b6c4e065e9c9b269dc3e8dd85fa02ece" exitCode=0
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.025424 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc9c6" event={"ID":"44d171ad-7d92-4c70-a686-65f60ded8a03","Type":"ContainerDied","Data":"d27dda8ede66374aa47b77a60b930fa0b6c4e065e9c9b269dc3e8dd85fa02ece"}
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.026410 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.026746 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.026991 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.027334 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a904aa73-23d7-4994-882a-4afafe02fb82","Type":"ContainerDied","Data":"c6277d115f7a8e7d06e98e6fbf746a8f5f67a2bf9660b521fc6a925c224a7f1a"}
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.027351 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.027414 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6277d115f7a8e7d06e98e6fbf746a8f5f67a2bf9660b521fc6a925c224a7f1a"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.109700 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.110530 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.111035 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.111273 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.111556 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.151196 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-kubelet-dir\") pod \"a904aa73-23d7-4994-882a-4afafe02fb82\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") "
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.151258 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a904aa73-23d7-4994-882a-4afafe02fb82-kube-api-access\") pod \"a904aa73-23d7-4994-882a-4afafe02fb82\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") "
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.151294 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-var-lock\") pod \"a904aa73-23d7-4994-882a-4afafe02fb82\" (UID: \"a904aa73-23d7-4994-882a-4afafe02fb82\") "
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.151295 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a904aa73-23d7-4994-882a-4afafe02fb82" (UID: "a904aa73-23d7-4994-882a-4afafe02fb82"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.151530 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-var-lock" (OuterVolumeSpecName: "var-lock") pod "a904aa73-23d7-4994-882a-4afafe02fb82" (UID: "a904aa73-23d7-4994-882a-4afafe02fb82"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.151957 4699 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.151980 4699 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a904aa73-23d7-4994-882a-4afafe02fb82-var-lock\") on node \"crc\" DevicePath \"\""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.158493 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a904aa73-23d7-4994-882a-4afafe02fb82-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a904aa73-23d7-4994-882a-4afafe02fb82" (UID: "a904aa73-23d7-4994-882a-4afafe02fb82"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.253588 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a904aa73-23d7-4994-882a-4afafe02fb82-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.369782 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.379874 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.380888 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.381296 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.381735 4699 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.381941 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.382170 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.557583 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.557667 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.557786 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.558247 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.558286 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.558303 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.659872 4699 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.659923 4699 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.659936 4699 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.697141 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535076-rv9x5"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.697741 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.698236 4699 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.699428 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.701520 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.702088 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.862040 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knl5z\" (UniqueName: \"kubernetes.io/projected/0d9d78c8-4193-47a8-9ed9-208f6dc25831-kube-api-access-knl5z\") pod \"0d9d78c8-4193-47a8-9ed9-208f6dc25831\" (UID: \"0d9d78c8-4193-47a8-9ed9-208f6dc25831\") "
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.873596 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9d78c8-4193-47a8-9ed9-208f6dc25831-kube-api-access-knl5z" (OuterVolumeSpecName: "kube-api-access-knl5z") pod "0d9d78c8-4193-47a8-9ed9-208f6dc25831" (UID: "0d9d78c8-4193-47a8-9ed9-208f6dc25831"). InnerVolumeSpecName "kube-api-access-knl5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:16:31 crc kubenswrapper[4699]: I0226 11:16:31.964959 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knl5z\" (UniqueName: \"kubernetes.io/projected/0d9d78c8-4193-47a8-9ed9-208f6dc25831-kube-api-access-knl5z\") on node \"crc\" DevicePath \"\""
Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.040586 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhbz" event={"ID":"9ea10063-7888-400e-af1c-216cbde5a13e","Type":"ContainerStarted","Data":"4459df84e7aab7535bf4732238c87c4da5222e3237b69439fff20886a1ea7688"}
Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.042220 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.042477 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.042791 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.043266 4699 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.043542 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.044048 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused"
Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.049996 4699 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.058745 4699 scope.go:117] "RemoveContainer" containerID="b29557dbd4105e597cfdc9d373556a23fe5f57a9d5ebf3c8bf9db2cd72c81886" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.058809 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.064735 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czwkc" event={"ID":"ac0026c3-1fad-4b34-9c42-389971f0c773","Type":"ContainerStarted","Data":"b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca"} Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.066034 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.066239 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.066442 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: 
connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.066624 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.066774 4699 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.066924 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.067423 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.070069 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"faaf0aadacd79051543cdb9cfcd026bfc89d0e173e7f3faae8b64f52a92a3ab3"} Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.070533 4699 
status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: E0226 11:16:32.070659 4699 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.070775 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.072461 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.073527 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.073937 4699 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.074296 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.074751 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.076546 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrk4n" event={"ID":"6e7ddf51-5522-4085-8567-76c9a254ed15","Type":"ContainerStarted","Data":"aa39a04716f8cdcc694931265437b30c1cb1c3615ab11016472ce3e95c18688b"} Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.077238 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.077617 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.077994 4699 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.078338 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.078775 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.079247 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.079639 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.079846 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.081385 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.081374 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" event={"ID":"0d9d78c8-4193-47a8-9ed9-208f6dc25831","Type":"ContainerDied","Data":"d9a56a2a86268af382b046874040445b48c2975a953e5d204ab0a77f6c325fdc"} Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.081513 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9a56a2a86268af382b046874040445b48c2975a953e5d204ab0a77f6c325fdc" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.095353 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.098174 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.098878 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.105323 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.105869 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.106409 4699 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.106850 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.107284 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.107649 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.107777 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8kpz" event={"ID":"8c96a703-e568-4916-8035-a951ae91dc2b","Type":"ContainerStarted","Data":"bca562e2d2fabc5097841d6398d6c6b6a6779605566f4dd3173111ee1e8c04f3"} Feb 26 11:16:32 crc 
kubenswrapper[4699]: I0226 11:16:32.111253 4699 scope.go:117] "RemoveContainer" containerID="ca0f1499c0047da376575db5cd438b18458dd638860a008df3b8341f2658d9ec" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.117514 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.117861 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.122529 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.124585 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.124818 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.125045 4699 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.125285 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.125508 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.125730 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.127176 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.129320 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.129581 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.129808 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.130099 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.130369 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.130635 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.130889 4699 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.131260 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.179478 4699 scope.go:117] "RemoveContainer" containerID="9756b395ca3363066f10f261108f31b35a960097dfd0720250c5dad7e95cf7f6" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.190315 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.190731 4699 status_manager.go:851] 
"Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.191161 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.191359 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.191569 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.191921 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.195469 4699 status_manager.go:851] "Failed to get status 
for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.196010 4699 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.196445 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.208572 4699 scope.go:117] "RemoveContainer" containerID="084a46a1304a8c639e21ab541eedf94adf1f3bd63960a1699426ebc68cddd356" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.234492 4699 scope.go:117] "RemoveContainer" containerID="9324369466fb6495b3f10e66136a5aaf920f50540ab5452b235060a1e229769e" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.271064 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 26 11:16:32 crc kubenswrapper[4699]: I0226 11:16:32.272089 4699 scope.go:117] "RemoveContainer" containerID="2490762714e27b6a745ba229aafbeca2b808eed346d6bd4de519a72944055035" Feb 26 11:16:32 crc kubenswrapper[4699]: E0226 11:16:32.912424 4699 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.213:6443: connect: connection refused" event=< Feb 26 11:16:32 crc kubenswrapper[4699]: &Event{ObjectMeta:{certified-operators-mzgjj.1897c7bc14fd9270 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-mzgjj,UID:71a83978-4f86-404b-967a-0e7493ff6721,APIVersion:v1,ResourceVersion:28011,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Startup probe failed: timeout: failed to connect service ":50051" within 1s Feb 26 11:16:32 crc kubenswrapper[4699]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:16:29.013521008 +0000 UTC m=+334.824347442,LastTimestamp:2026-02-26 11:16:29.013521008 +0000 UTC m=+334.824347442,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:16:32 crc kubenswrapper[4699]: > Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.127248 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc9c6" event={"ID":"44d171ad-7d92-4c70-a686-65f60ded8a03","Type":"ContainerStarted","Data":"704c8ba25f50ac5c881bb9d05eb872ee3851c9a21d28c2acf7a27a400acbebe0"} Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.128751 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.129840 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" 
pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.130704 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.132150 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.133255 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.133710 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.134433 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" 
pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.135583 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.142283 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhgks" event={"ID":"6b9da973-6b5f-4485-adca-8792b0a3d256","Type":"ContainerStarted","Data":"58ec65080ac68341b08f4272194fe62d85383a27766f002151749856e7c508e7"} Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.146351 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: E0226 11:16:33.146417 4699 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.146774 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.147381 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.151028 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.154349 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.155684 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.156740 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:33 crc kubenswrapper[4699]: I0226 11:16:33.158017 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:34 crc kubenswrapper[4699]: I0226 11:16:34.406249 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:16:34 crc kubenswrapper[4699]: I0226 11:16:34.406334 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:16:34 crc kubenswrapper[4699]: I0226 11:16:34.406373 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:16:34 crc kubenswrapper[4699]: I0226 11:16:34.406412 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:16:34 crc kubenswrapper[4699]: I0226 11:16:34.415210 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:16:34 crc kubenswrapper[4699]: I0226 11:16:34.415292 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:16:34 crc kubenswrapper[4699]: I0226 11:16:34.415897 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:16:34 crc kubenswrapper[4699]: I0226 11:16:34.447372 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:16:34 crc kubenswrapper[4699]: I0226 11:16:34.462588 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:16:34 crc kubenswrapper[4699]: I0226 11:16:34.466074 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:16:34 crc kubenswrapper[4699]: I0226 11:16:34.561246 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.229307 4699 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 26 11:16:35 crc kubenswrapper[4699]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150" Netns:"/var/run/netns/133c11d8-7a50-4785-9ef4-18402b3d7555" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: 
[openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:16:35 crc kubenswrapper[4699]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 11:16:35 crc kubenswrapper[4699]: > Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.229654 4699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 26 11:16:35 crc kubenswrapper[4699]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150" Netns:"/var/run/netns/133c11d8-7a50-4785-9ef4-18402b3d7555" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:16:35 crc kubenswrapper[4699]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 11:16:35 crc kubenswrapper[4699]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.229676 4699 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 26 11:16:35 crc kubenswrapper[4699]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150): error adding pod 
openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150" Netns:"/var/run/netns/133c11d8-7a50-4785-9ef4-18402b3d7555" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:16:35 crc kubenswrapper[4699]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 11:16:35 crc kubenswrapper[4699]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.229735 4699 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150\\\" Netns:\\\"/var/run/netns/133c11d8-7a50-4785-9ef4-18402b3d7555\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=4b9c64eccc53ca6b1d5079baf6d1ddc69de002cf2d24471b40f9a87fa976f150;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s\\\": dial tcp 
38.102.83.213:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.447457 4699 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 26 11:16:35 crc kubenswrapper[4699]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae" Netns:"/var/run/netns/2766d39c-b661-4580-8fc6-ec348c624ecd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to 
update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:16:35 crc kubenswrapper[4699]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 11:16:35 crc kubenswrapper[4699]: > Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.447547 4699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 26 11:16:35 crc kubenswrapper[4699]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae" Netns:"/var/run/netns/2766d39c-b661-4580-8fc6-ec348c624ecd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] 
networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:16:35 crc kubenswrapper[4699]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 11:16:35 crc kubenswrapper[4699]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.447568 4699 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 26 11:16:35 crc kubenswrapper[4699]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae" Netns:"/var/run/netns/2766d39c-b661-4580-8fc6-ec348c624ecd" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:16:35 crc kubenswrapper[4699]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 11:16:35 crc kubenswrapper[4699]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.447628 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae\\\" Netns:\\\"/var/run/netns/2766d39c-b661-4580-8fc6-ec348c624ecd\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=4aeb54c841185c5491db25eb2ef7e833fb793475dad5f13ba58008593b2e6bae;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s\\\": dial tcp 38.102.83.213:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.530140 4699 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 26 11:16:35 crc kubenswrapper[4699]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096" Netns:"/var/run/netns/fceeb3d9-4f37-4878-b6c0-5410f26fe14e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:16:35 crc kubenswrapper[4699]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 11:16:35 crc kubenswrapper[4699]: > Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.530226 4699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 26 11:16:35 crc kubenswrapper[4699]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096" Netns:"/var/run/netns/fceeb3d9-4f37-4878-b6c0-5410f26fe14e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod 
[openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:16:35 crc kubenswrapper[4699]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 11:16:35 crc kubenswrapper[4699]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.530249 4699 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 26 11:16:35 crc kubenswrapper[4699]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096" 
Netns:"/var/run/netns/fceeb3d9-4f37-4878-b6c0-5410f26fe14e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.213:6443: connect: connection refused Feb 26 11:16:35 crc kubenswrapper[4699]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 11:16:35 crc kubenswrapper[4699]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:16:35 crc kubenswrapper[4699]: E0226 11:16:35.530352 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096\\\" Netns:\\\"/var/run/netns/fceeb3d9-4f37-4878-b6c0-5410f26fe14e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=096a117dc4643f6623dddd247c83762dc7b1f2ccba0b0277af31a471398f3096;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s\\\": dial tcp 38.102.83.213:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:16:36 crc kubenswrapper[4699]: I0226 11:16:36.263694 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: I0226 11:16:36.264149 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: I0226 11:16:36.264758 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: I0226 11:16:36.265163 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" 
pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: I0226 11:16:36.265646 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: I0226 11:16:36.265908 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: I0226 11:16:36.266211 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: I0226 11:16:36.266537 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: E0226 11:16:36.505340 4699 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: E0226 11:16:36.506443 4699 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: E0226 11:16:36.507205 4699 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: E0226 11:16:36.507537 4699 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: E0226 11:16:36.507802 4699 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:36 crc kubenswrapper[4699]: I0226 11:16:36.507826 4699 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 26 11:16:36 crc kubenswrapper[4699]: E0226 11:16:36.508238 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="200ms" Feb 26 11:16:36 crc kubenswrapper[4699]: E0226 11:16:36.708653 4699 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="400ms" Feb 26 11:16:37 crc kubenswrapper[4699]: E0226 11:16:37.109921 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="800ms" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.632096 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.632587 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.632981 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.633566 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.633863 4699 
status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.634206 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.634554 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.634880 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.635198 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.635568 4699 
status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.681437 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.681508 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.684369 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mzgjj" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.685435 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.685940 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.686435 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.695423 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.696081 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.696620 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.697183 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.698389 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.698713 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.762057 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.762726 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.763211 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.763562 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.763912 4699 
status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.764193 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.764464 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.764768 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.765025 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.765296 4699 status_manager.go:851] 
"Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: E0226 11:16:37.910872 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="1.6s" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.938192 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.938240 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.986842 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.987336 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.987709 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 
11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.988185 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.988431 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.988723 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.988998 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.989235 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc 
kubenswrapper[4699]: I0226 11:16:37.989517 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:37 crc kubenswrapper[4699]: I0226 11:16:37.989827 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.224623 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-czwkc" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.225808 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.226657 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.227090 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.227387 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.227502 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-phhbz" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.227764 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.228306 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.228558 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 
11:16:38.228927 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.229333 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.229761 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.230097 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.230375 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.230603 4699 
status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.230987 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.231368 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.231725 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.232292 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:38 crc kubenswrapper[4699]: I0226 11:16:38.232595 4699 
status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.260136 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.260719 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.261380 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.261664 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.261905 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.262625 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.263174 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.263455 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.263725 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.264190 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.274632 4699 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee33485d-044d-4356-a626-df5e4625a4f2" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.274682 4699 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee33485d-044d-4356-a626-df5e4625a4f2" Feb 26 11:16:39 crc kubenswrapper[4699]: E0226 11:16:39.275207 4699 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.275585 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:16:39 crc kubenswrapper[4699]: E0226 11:16:39.512242 4699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="3.2s" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.878239 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.878318 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.927930 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.928442 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.928957 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.929200 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.929383 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.929548 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.929756 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.929924 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.930068 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:39 crc kubenswrapper[4699]: I0226 11:16:39.930435 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.197627 4699 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="48cf4b0cee4da5e5cbcf8557432f0b1f33a2168dd67b4f462eebcce08f058e73" exitCode=0 Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.197736 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"48cf4b0cee4da5e5cbcf8557432f0b1f33a2168dd67b4f462eebcce08f058e73"} Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.197770 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"481a819fe1f992a44f7b1ffb594159a9ca1def1223405b7ef17bae7b584fa892"} Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.198267 4699 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee33485d-044d-4356-a626-df5e4625a4f2" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.198293 4699 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee33485d-044d-4356-a626-df5e4625a4f2" Feb 26 11:16:40 crc kubenswrapper[4699]: E0226 11:16:40.198782 4699 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.199040 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.199464 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.199952 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.200279 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.201090 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" 
pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.201558 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.203921 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.204345 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.204676 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.224580 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hrk4n" 
Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.224671 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.246736 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s8kpz" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.247375 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.247728 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.248224 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.248498 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: 
I0226 11:16:40.248812 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.249194 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.249495 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.249727 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.250053 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 
11:16:40.270098 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.270599 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.270774 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.270935 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.271099 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.271311 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.271475 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.271636 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.271963 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.272155 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.768964 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.769030 4699 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.811010 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.812348 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.813166 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.813952 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.814360 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.814688 4699 status_manager.go:851] "Failed to get status for 
pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.814967 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.815406 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.815983 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.816533 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.959272 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:16:40 crc kubenswrapper[4699]: I0226 11:16:40.960221 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.001224 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.002160 4699 status_manager.go:851] "Failed to get status for pod" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.002626 4699 status_manager.go:851] "Failed to get status for pod" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" pod="openshift-infra/auto-csr-approver-29535076-rv9x5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535076-rv9x5\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.002918 4699 status_manager.go:851] "Failed to get status for pod" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" pod="openshift-marketplace/redhat-marketplace-hrk4n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hrk4n\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.003276 4699 status_manager.go:851] "Failed to get status for pod" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" pod="openshift-marketplace/redhat-operators-jhgks" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jhgks\": dial tcp 38.102.83.213:6443: connect: connection 
refused" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.003519 4699 status_manager.go:851] "Failed to get status for pod" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" pod="openshift-marketplace/community-operators-czwkc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-czwkc\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.003792 4699 status_manager.go:851] "Failed to get status for pod" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" pod="openshift-marketplace/redhat-marketplace-s8kpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8kpz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.004215 4699 status_manager.go:851] "Failed to get status for pod" podUID="71a83978-4f86-404b-967a-0e7493ff6721" pod="openshift-marketplace/certified-operators-mzgjj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzgjj\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.004577 4699 status_manager.go:851] "Failed to get status for pod" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" pod="openshift-marketplace/certified-operators-phhbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-phhbz\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.004836 4699 status_manager.go:851] "Failed to get status for pod" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" pod="openshift-marketplace/redhat-operators-sc9c6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sc9c6\": dial tcp 38.102.83.213:6443: connect: connection refused" Feb 26 
11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.206710 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a1843c698a251c7f80d3727f1184692d6c1af6b5dced3224a5cd37e295f94ef1"} Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.249986 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sc9c6" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.251885 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jhgks" Feb 26 11:16:41 crc kubenswrapper[4699]: I0226 11:16:41.257108 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:16:42 crc kubenswrapper[4699]: I0226 11:16:42.234283 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5cb7b42b2f5c346f9bff2b95ce8298fcf0e6608034e456bdd599fe9085cdf98e"} Feb 26 11:16:42 crc kubenswrapper[4699]: I0226 11:16:42.234667 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ca226e16abe2a265999c17c48d3416798a8bdb915eca4c716f1810a970605169"} Feb 26 11:16:43 crc kubenswrapper[4699]: I0226 11:16:43.261950 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 11:16:43 crc kubenswrapper[4699]: I0226 11:16:43.262954 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 11:16:43 crc kubenswrapper[4699]: I0226 11:16:43.263009 4699 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bcbdf473c08abfc93be6ee643eb86aebdaf8cae59cbe4c844b800862b15f7434" exitCode=1 Feb 26 11:16:43 crc kubenswrapper[4699]: I0226 11:16:43.263078 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bcbdf473c08abfc93be6ee643eb86aebdaf8cae59cbe4c844b800862b15f7434"} Feb 26 11:16:43 crc kubenswrapper[4699]: I0226 11:16:43.263436 4699 scope.go:117] "RemoveContainer" containerID="bcbdf473c08abfc93be6ee643eb86aebdaf8cae59cbe4c844b800862b15f7434" Feb 26 11:16:43 crc kubenswrapper[4699]: I0226 11:16:43.266614 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"95a41b7dc7585b88734e5f6819f1c30b7b13f4540c1937bb19c1f6585ca5ee27"} Feb 26 11:16:43 crc kubenswrapper[4699]: I0226 11:16:43.266667 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"22ea1c1238fe39be89eb8deeaff7ea021e9a6e811f07e3d77c21759cac2c4689"} Feb 26 11:16:43 crc kubenswrapper[4699]: I0226 11:16:43.266888 4699 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee33485d-044d-4356-a626-df5e4625a4f2" Feb 26 11:16:43 crc kubenswrapper[4699]: I0226 11:16:43.266920 4699 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee33485d-044d-4356-a626-df5e4625a4f2" Feb 26 11:16:43 crc kubenswrapper[4699]: 
I0226 11:16:43.319783 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:16:44 crc kubenswrapper[4699]: I0226 11:16:44.275686 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 11:16:44 crc kubenswrapper[4699]: I0226 11:16:44.276156 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:16:44 crc kubenswrapper[4699]: I0226 11:16:44.276201 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:16:44 crc kubenswrapper[4699]: I0226 11:16:44.276568 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 11:16:44 crc kubenswrapper[4699]: I0226 11:16:44.276619 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d8326aeadcc826fe0424355bd287bf65d6610bc258e44863eed96368db20aa6a"} Feb 26 11:16:44 crc kubenswrapper[4699]: I0226 11:16:44.281324 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:16:45 crc kubenswrapper[4699]: I0226 11:16:45.171612 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:16:46 crc kubenswrapper[4699]: I0226 11:16:46.260645 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:16:46 crc kubenswrapper[4699]: I0226 11:16:46.261172 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:16:46 crc kubenswrapper[4699]: W0226 11:16:46.690333 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-00bc26edafd1327bd975f1a97124eb62afc42bc7231a3d55b2cab5b23d8e9df2 WatchSource:0}: Error finding container 00bc26edafd1327bd975f1a97124eb62afc42bc7231a3d55b2cab5b23d8e9df2: Status 404 returned error can't find the container with id 00bc26edafd1327bd975f1a97124eb62afc42bc7231a3d55b2cab5b23d8e9df2 Feb 26 11:16:47 crc kubenswrapper[4699]: I0226 11:16:47.295244 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"00bc26edafd1327bd975f1a97124eb62afc42bc7231a3d55b2cab5b23d8e9df2"} Feb 26 11:16:48 crc kubenswrapper[4699]: I0226 11:16:48.277712 4699 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:16:48 crc kubenswrapper[4699]: I0226 11:16:48.303471 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3c4d331786b860f4598df47c583b55022d7939fed3acfc4cee5bc14012a5f9df"} Feb 26 11:16:48 crc kubenswrapper[4699]: I0226 11:16:48.303691 4699 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee33485d-044d-4356-a626-df5e4625a4f2" Feb 26 11:16:48 crc kubenswrapper[4699]: I0226 11:16:48.303713 4699 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee33485d-044d-4356-a626-df5e4625a4f2" Feb 26 11:16:48 crc kubenswrapper[4699]: I0226 11:16:48.303716 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:16:48 crc kubenswrapper[4699]: I0226 11:16:48.303837 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:16:48 crc kubenswrapper[4699]: I0226 11:16:48.308016 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:16:48 crc kubenswrapper[4699]: I0226 11:16:48.323844 4699 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="808de523-c5cd-43c6-9394-190a9608a367" Feb 26 11:16:49 crc kubenswrapper[4699]: I0226 11:16:49.307836 4699 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee33485d-044d-4356-a626-df5e4625a4f2" Feb 26 11:16:49 crc kubenswrapper[4699]: I0226 11:16:49.307865 4699 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ee33485d-044d-4356-a626-df5e4625a4f2" Feb 26 11:16:50 crc kubenswrapper[4699]: I0226 11:16:50.259912 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:16:50 crc kubenswrapper[4699]: I0226 11:16:50.260899 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:16:50 crc kubenswrapper[4699]: I0226 11:16:50.261299 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:16:50 crc kubenswrapper[4699]: I0226 11:16:50.261788 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:16:50 crc kubenswrapper[4699]: W0226 11:16:50.691585 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-da2cc71a1861730623d03d023ddc99bf68c166cd895e60b2f3c4083fe9be5b3e WatchSource:0}: Error finding container da2cc71a1861730623d03d023ddc99bf68c166cd895e60b2f3c4083fe9be5b3e: Status 404 returned error can't find the container with id da2cc71a1861730623d03d023ddc99bf68c166cd895e60b2f3c4083fe9be5b3e Feb 26 11:16:51 crc kubenswrapper[4699]: I0226 11:16:51.324345 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bb583ed365d8e16810e128b490773c9fa9cc1d3d176995bd5e50e802cb8d7f97"} Feb 26 11:16:51 crc kubenswrapper[4699]: I0226 11:16:51.325076 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"da2cc71a1861730623d03d023ddc99bf68c166cd895e60b2f3c4083fe9be5b3e"} Feb 26 11:16:51 crc kubenswrapper[4699]: I0226 11:16:51.331063 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"52ec8357ee858714f10f2561b43d88e0ba4c9d95314995e90ecdffb9dc1b0781"} Feb 26 11:16:51 crc kubenswrapper[4699]: I0226 11:16:51.331097 4699 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8458cde859ed037ea958d3cd57ef1865030356ac78f9c293ef891e98f868d877"} Feb 26 11:16:53 crc kubenswrapper[4699]: I0226 11:16:53.320316 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:16:53 crc kubenswrapper[4699]: I0226 11:16:53.320543 4699 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 11:16:53 crc kubenswrapper[4699]: I0226 11:16:53.320696 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 11:16:53 crc kubenswrapper[4699]: I0226 11:16:53.345595 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 26 11:16:53 crc kubenswrapper[4699]: I0226 11:16:53.345646 4699 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="52ec8357ee858714f10f2561b43d88e0ba4c9d95314995e90ecdffb9dc1b0781" exitCode=255 Feb 26 11:16:53 crc kubenswrapper[4699]: I0226 11:16:53.345679 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"52ec8357ee858714f10f2561b43d88e0ba4c9d95314995e90ecdffb9dc1b0781"} Feb 26 11:16:53 crc kubenswrapper[4699]: I0226 11:16:53.346179 4699 scope.go:117] "RemoveContainer" containerID="52ec8357ee858714f10f2561b43d88e0ba4c9d95314995e90ecdffb9dc1b0781" Feb 26 11:16:54 crc kubenswrapper[4699]: I0226 11:16:54.354505 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 26 11:16:54 crc kubenswrapper[4699]: I0226 11:16:54.354896 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6185db4f6fa07a6ece323e079cb5de2c410483b56be6a8ee0c600f7ca82c1ef1"} Feb 26 11:16:55 crc kubenswrapper[4699]: I0226 11:16:55.361777 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 26 11:16:55 crc kubenswrapper[4699]: I0226 11:16:55.362191 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 26 11:16:55 crc kubenswrapper[4699]: I0226 11:16:55.362223 4699 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="6185db4f6fa07a6ece323e079cb5de2c410483b56be6a8ee0c600f7ca82c1ef1" exitCode=255 Feb 26 11:16:55 crc kubenswrapper[4699]: I0226 11:16:55.362249 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"6185db4f6fa07a6ece323e079cb5de2c410483b56be6a8ee0c600f7ca82c1ef1"} Feb 26 11:16:55 crc kubenswrapper[4699]: I0226 11:16:55.362281 4699 scope.go:117] "RemoveContainer" containerID="52ec8357ee858714f10f2561b43d88e0ba4c9d95314995e90ecdffb9dc1b0781" Feb 26 11:16:55 crc kubenswrapper[4699]: I0226 11:16:55.362884 4699 scope.go:117] "RemoveContainer" containerID="6185db4f6fa07a6ece323e079cb5de2c410483b56be6a8ee0c600f7ca82c1ef1" Feb 26 11:16:55 crc kubenswrapper[4699]: E0226 11:16:55.363164 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:16:56 crc kubenswrapper[4699]: I0226 11:16:56.290913 4699 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="808de523-c5cd-43c6-9394-190a9608a367" Feb 26 11:16:56 crc kubenswrapper[4699]: I0226 11:16:56.369838 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 26 11:16:58 crc kubenswrapper[4699]: I0226 11:16:58.016830 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 11:16:59 crc kubenswrapper[4699]: I0226 11:16:59.510633 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 11:17:00 crc kubenswrapper[4699]: I0226 11:17:00.185856 4699 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 11:17:00 crc kubenswrapper[4699]: I0226 11:17:00.543791 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 11:17:00 crc kubenswrapper[4699]: I0226 11:17:00.683760 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 26 11:17:00 crc kubenswrapper[4699]: I0226 11:17:00.884791 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 11:17:01 crc kubenswrapper[4699]: I0226 11:17:01.214634 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 11:17:01 crc kubenswrapper[4699]: I0226 11:17:01.264158 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 11:17:01 crc kubenswrapper[4699]: I0226 11:17:01.326949 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 26 11:17:01 crc kubenswrapper[4699]: I0226 11:17:01.418174 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 11:17:01 crc kubenswrapper[4699]: I0226 11:17:01.445057 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 11:17:01 crc kubenswrapper[4699]: I0226 11:17:01.650854 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 26 11:17:01 crc kubenswrapper[4699]: I0226 11:17:01.711327 4699 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 11:17:01 crc kubenswrapper[4699]: I0226 11:17:01.846489 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 11:17:01 crc kubenswrapper[4699]: I0226 11:17:01.889473 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 26 11:17:01 crc kubenswrapper[4699]: I0226 11:17:01.919410 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 11:17:01 crc kubenswrapper[4699]: I0226 11:17:01.929727 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 26 11:17:02 crc kubenswrapper[4699]: I0226 11:17:02.012931 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 11:17:02 crc kubenswrapper[4699]: I0226 11:17:02.117335 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 11:17:02 crc kubenswrapper[4699]: I0226 11:17:02.244452 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 26 11:17:02 crc kubenswrapper[4699]: I0226 11:17:02.274078 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 11:17:02 crc kubenswrapper[4699]: I0226 11:17:02.313761 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 11:17:02 crc kubenswrapper[4699]: I0226 11:17:02.382297 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 11:17:02 crc kubenswrapper[4699]: I0226 11:17:02.415432 4699 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 11:17:02 crc kubenswrapper[4699]: I0226 11:17:02.435732 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 26 11:17:02 crc kubenswrapper[4699]: I0226 11:17:02.608662 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 11:17:02 crc kubenswrapper[4699]: I0226 11:17:02.955867 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 11:17:03 crc kubenswrapper[4699]: I0226 11:17:03.015433 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 11:17:03 crc kubenswrapper[4699]: I0226 11:17:03.194498 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 11:17:03 crc kubenswrapper[4699]: I0226 11:17:03.320159 4699 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 11:17:03 crc kubenswrapper[4699]: I0226 11:17:03.320237 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 11:17:03 crc kubenswrapper[4699]: I0226 11:17:03.333368 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 
26 11:17:03 crc kubenswrapper[4699]: I0226 11:17:03.350871 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 26 11:17:03 crc kubenswrapper[4699]: I0226 11:17:03.451946 4699 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 11:17:03 crc kubenswrapper[4699]: I0226 11:17:03.522229 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 26 11:17:03 crc kubenswrapper[4699]: I0226 11:17:03.613889 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 11:17:03 crc kubenswrapper[4699]: I0226 11:17:03.750223 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.088343 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.100907 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.141444 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.169721 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.177419 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.227633 4699 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.303773 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.428861 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.460838 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.498599 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.514653 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.593005 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.643868 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.645171 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.675954 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.801043 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 11:17:04 crc 
kubenswrapper[4699]: I0226 11:17:04.923514 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.939040 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 11:17:04 crc kubenswrapper[4699]: I0226 11:17:04.962132 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.055036 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.055860 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.070573 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.120370 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.139073 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.219380 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.286586 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.385461 4699 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.405645 4699 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.425389 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.526487 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.666859 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.701488 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.788315 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.825303 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 11:17:05 crc kubenswrapper[4699]: I0226 11:17:05.852715 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.012869 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.019782 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 
11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.043552 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.053291 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.053881 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.059401 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.095170 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.145597 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.152755 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.166246 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.175755 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.261822 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.372166 4699 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.418812 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.463181 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.548695 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.555080 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.719102 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.719190 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.744349 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.824562 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.825912 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.834888 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.837038 
4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.893772 4699 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.896083 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s8kpz" podStartSLOduration=39.400673462 podStartE2EDuration="2m57.896057454s" podCreationTimestamp="2026-02-26 11:14:09 +0000 UTC" firstStartedPulling="2026-02-26 11:14:12.21895184 +0000 UTC m=+198.029778274" lastFinishedPulling="2026-02-26 11:16:30.714335832 +0000 UTC m=+336.525162266" observedRunningTime="2026-02-26 11:16:47.611460029 +0000 UTC m=+353.422286473" watchObservedRunningTime="2026-02-26 11:17:06.896057454 +0000 UTC m=+372.706883888" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.896671 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sc9c6" podStartSLOduration=37.07970643 podStartE2EDuration="2m56.896665591s" podCreationTimestamp="2026-02-26 11:14:10 +0000 UTC" firstStartedPulling="2026-02-26 11:14:12.18032772 +0000 UTC m=+197.991154154" lastFinishedPulling="2026-02-26 11:16:31.997286881 +0000 UTC m=+337.808113315" observedRunningTime="2026-02-26 11:16:47.684248556 +0000 UTC m=+353.495075010" watchObservedRunningTime="2026-02-26 11:17:06.896665591 +0000 UTC m=+372.707492015" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.898046 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jhgks" podStartSLOduration=37.140931362 podStartE2EDuration="2m56.898037061s" podCreationTimestamp="2026-02-26 11:14:10 +0000 UTC" firstStartedPulling="2026-02-26 11:14:12.296659914 +0000 UTC m=+198.107486348" lastFinishedPulling="2026-02-26 11:16:32.053765593 +0000 
UTC m=+337.864592047" observedRunningTime="2026-02-26 11:16:47.766610326 +0000 UTC m=+353.577436780" watchObservedRunningTime="2026-02-26 11:17:06.898037061 +0000 UTC m=+372.708863515" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.898657 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-phhbz" podStartSLOduration=39.172124348 podStartE2EDuration="2m59.898647198s" podCreationTimestamp="2026-02-26 11:14:07 +0000 UTC" firstStartedPulling="2026-02-26 11:14:11.114712168 +0000 UTC m=+196.925538602" lastFinishedPulling="2026-02-26 11:16:31.841235018 +0000 UTC m=+337.652061452" observedRunningTime="2026-02-26 11:16:47.663796682 +0000 UTC m=+353.474623126" watchObservedRunningTime="2026-02-26 11:17:06.898647198 +0000 UTC m=+372.709473642" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.898999 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-czwkc" podStartSLOduration=39.833540777 podStartE2EDuration="2m59.898992988s" podCreationTimestamp="2026-02-26 11:14:07 +0000 UTC" firstStartedPulling="2026-02-26 11:14:11.090009139 +0000 UTC m=+196.900835573" lastFinishedPulling="2026-02-26 11:16:31.15546134 +0000 UTC m=+336.966287784" observedRunningTime="2026-02-26 11:16:47.5943219 +0000 UTC m=+353.405148334" watchObservedRunningTime="2026-02-26 11:17:06.898992988 +0000 UTC m=+372.709819422" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.899869 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hrk4n" podStartSLOduration=38.40462806 podStartE2EDuration="2m57.899860393s" podCreationTimestamp="2026-02-26 11:14:09 +0000 UTC" firstStartedPulling="2026-02-26 11:14:12.188606254 +0000 UTC m=+197.999432688" lastFinishedPulling="2026-02-26 11:16:31.683838567 +0000 UTC m=+337.494665021" observedRunningTime="2026-02-26 11:16:47.735403415 +0000 UTC 
m=+353.546229859" watchObservedRunningTime="2026-02-26 11:17:06.899860393 +0000 UTC m=+372.710686827" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.900662 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.900721 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.905314 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.929530 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.929502098 podStartE2EDuration="18.929502098s" podCreationTimestamp="2026-02-26 11:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:17:06.925073902 +0000 UTC m=+372.735900346" watchObservedRunningTime="2026-02-26 11:17:06.929502098 +0000 UTC m=+372.740328532" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.964862 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.968036 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.972305 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 11:17:06.975954 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 11:17:06 crc kubenswrapper[4699]: I0226 
11:17:06.979538 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.009962 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.041746 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.067781 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.072725 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.134067 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.161967 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.176782 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.192696 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.382761 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.430059 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 
11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.508974 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.553100 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.566989 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.600299 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.632306 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.679399 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.705188 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.724260 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.739241 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.826412 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.859038 4699 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.879560 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.923610 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.931451 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.988029 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.988847 4699 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 26 11:17:07 crc kubenswrapper[4699]: I0226 11:17:07.994485 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.031629 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.091166 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.128268 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.130233 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 11:17:08 crc 
kubenswrapper[4699]: I0226 11:17:08.266586 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.287837 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.288280 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.334025 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.387083 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.387679 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.441123 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.441461 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.507754 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.530379 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.533421 4699 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.621990 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.659533 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.679757 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.703881 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.717879 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.722889 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.723427 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.772819 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.789881 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.814642 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 
11:17:08.821019 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.845615 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.855898 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.872156 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.884799 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 11:17:08 crc kubenswrapper[4699]: I0226 11:17:08.890058 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.063175 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.136772 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.145917 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.178531 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.181423 4699 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.182539 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.261049 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.294541 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.399028 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.429409 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.536811 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.552194 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.581157 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.581430 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 11:17:09 crc kubenswrapper[4699]: I0226 11:17:09.992193 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.001349 4699 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.002690 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.007960 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.201547 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.224566 4699 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.244342 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.260984 4699 scope.go:117] "RemoveContainer" containerID="6185db4f6fa07a6ece323e079cb5de2c410483b56be6a8ee0c600f7ca82c1ef1" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.314951 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.399412 4699 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.400078 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://faaf0aadacd79051543cdb9cfcd026bfc89d0e173e7f3faae8b64f52a92a3ab3" gracePeriod=5 Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.447698 4699 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.451506 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.489616 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.518548 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.526609 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.574864 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.606999 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.613794 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.657233 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.790142 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.838873 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.855856 4699 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.879429 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.881462 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.916654 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.923221 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.926792 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 11:17:10 crc kubenswrapper[4699]: I0226 11:17:10.984506 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.004425 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.025255 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.068045 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.107844 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.267643 4699 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.379462 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.472273 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.510942 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.526267 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.657954 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.729397 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.742810 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.914869 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 11:17:11 crc kubenswrapper[4699]: I0226 11:17:11.958538 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.084936 4699 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.122312 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.137641 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.163390 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.165093 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.351543 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.357688 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.365434 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.432818 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.458400 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.463424 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.463478 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4d3fcdf2fd98cdbc61858f6f5382fbea38fad4c688754a8c137d17494350094e"} Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.564830 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 11:17:12 crc kubenswrapper[4699]: I0226 11:17:12.781161 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.280994 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.320131 4699 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.320181 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.320230 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.320816 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"d8326aeadcc826fe0424355bd287bf65d6610bc258e44863eed96368db20aa6a"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.320927 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://d8326aeadcc826fe0424355bd287bf65d6610bc258e44863eed96368db20aa6a" gracePeriod=30 Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.362080 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.497723 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.563378 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.644053 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.678267 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.723665 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 26 11:17:13 
crc kubenswrapper[4699]: I0226 11:17:13.754657 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.754862 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.761906 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.764137 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.900027 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.979317 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 26 11:17:13 crc kubenswrapper[4699]: I0226 11:17:13.980153 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 11:17:14 crc kubenswrapper[4699]: I0226 11:17:14.006001 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 11:17:14 crc kubenswrapper[4699]: I0226 11:17:14.008960 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 11:17:14 crc kubenswrapper[4699]: I0226 11:17:14.176577 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 11:17:14 crc kubenswrapper[4699]: I0226 11:17:14.184500 4699 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 11:17:14 crc kubenswrapper[4699]: I0226 11:17:14.184807 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 11:17:14 crc kubenswrapper[4699]: I0226 11:17:14.226075 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 11:17:14 crc kubenswrapper[4699]: I0226 11:17:14.367914 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 11:17:14 crc kubenswrapper[4699]: I0226 11:17:14.498002 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 11:17:14 crc kubenswrapper[4699]: I0226 11:17:14.584151 4699 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 11:17:14 crc kubenswrapper[4699]: I0226 11:17:14.703864 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 11:17:14 crc kubenswrapper[4699]: I0226 11:17:14.878218 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 11:17:15 crc kubenswrapper[4699]: I0226 11:17:15.490096 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 11:17:15 crc kubenswrapper[4699]: I0226 11:17:15.490329 4699 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="faaf0aadacd79051543cdb9cfcd026bfc89d0e173e7f3faae8b64f52a92a3ab3" exitCode=137 Feb 26 11:17:15 crc kubenswrapper[4699]: I0226 11:17:15.996257 4699 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 11:17:15 crc kubenswrapper[4699]: I0226 11:17:15.996336 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.070304 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.087483 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.087582 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.087586 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.087653 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.087787 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.087818 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.087840 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.087864 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.087939 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.088163 4699 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.088178 4699 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.088186 4699 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.088213 4699 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.099439 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.129417 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.189745 4699 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.266630 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.375083 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.498617 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.498725 4699 scope.go:117] "RemoveContainer" containerID="faaf0aadacd79051543cdb9cfcd026bfc89d0e173e7f3faae8b64f52a92a3ab3" Feb 26 11:17:16 crc kubenswrapper[4699]: I0226 11:17:16.498761 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:17:24 crc kubenswrapper[4699]: I0226 11:17:24.476104 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:17:43 crc kubenswrapper[4699]: I0226 11:17:43.668351 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 26 11:17:43 crc kubenswrapper[4699]: I0226 11:17:43.671366 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 11:17:43 crc kubenswrapper[4699]: I0226 11:17:43.673177 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 11:17:43 crc kubenswrapper[4699]: I0226 11:17:43.673224 4699 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d8326aeadcc826fe0424355bd287bf65d6610bc258e44863eed96368db20aa6a" exitCode=137 Feb 26 11:17:43 crc kubenswrapper[4699]: I0226 11:17:43.673257 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d8326aeadcc826fe0424355bd287bf65d6610bc258e44863eed96368db20aa6a"} Feb 26 11:17:43 crc kubenswrapper[4699]: I0226 11:17:43.673290 4699 scope.go:117] "RemoveContainer" containerID="bcbdf473c08abfc93be6ee643eb86aebdaf8cae59cbe4c844b800862b15f7434" Feb 26 11:17:44 crc kubenswrapper[4699]: I0226 11:17:44.684029 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 26 11:17:44 crc kubenswrapper[4699]: I0226 11:17:44.686680 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 11:17:44 crc kubenswrapper[4699]: I0226 11:17:44.686761 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ab6a01aa53c295261f294f7fc7e981271d68ba8a24167aed4f9a82edd5da1265"} Feb 26 11:17:45 crc kubenswrapper[4699]: I0226 11:17:45.171261 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:17:53 crc kubenswrapper[4699]: I0226 11:17:53.320531 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:17:53 crc kubenswrapper[4699]: I0226 11:17:53.325243 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:17:55 crc kubenswrapper[4699]: I0226 11:17:55.176514 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:17:59 crc kubenswrapper[4699]: I0226 11:17:59.909337 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-24qnt"] Feb 26 11:17:59 crc kubenswrapper[4699]: E0226 11:17:59.911348 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 11:17:59 crc kubenswrapper[4699]: I0226 11:17:59.911536 4699 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 11:17:59 crc kubenswrapper[4699]: E0226 11:17:59.911672 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" containerName="oc" Feb 26 11:17:59 crc kubenswrapper[4699]: I0226 11:17:59.911784 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" containerName="oc" Feb 26 11:17:59 crc kubenswrapper[4699]: E0226 11:17:59.911902 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" containerName="installer" Feb 26 11:17:59 crc kubenswrapper[4699]: I0226 11:17:59.912008 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" containerName="installer" Feb 26 11:17:59 crc kubenswrapper[4699]: I0226 11:17:59.912341 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 11:17:59 crc kubenswrapper[4699]: I0226 11:17:59.912458 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="a904aa73-23d7-4994-882a-4afafe02fb82" containerName="installer" Feb 26 11:17:59 crc kubenswrapper[4699]: I0226 11:17:59.912548 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" containerName="oc" Feb 26 11:17:59 crc kubenswrapper[4699]: I0226 11:17:59.918514 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:17:59 crc kubenswrapper[4699]: I0226 11:17:59.970773 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-24qnt"] Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.027470 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de71c708-910f-44de-8a6c-93671ddc16ec-ca-trust-extracted\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.027613 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfdh\" (UniqueName: \"kubernetes.io/projected/de71c708-910f-44de-8a6c-93671ddc16ec-kube-api-access-5dfdh\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.027645 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de71c708-910f-44de-8a6c-93671ddc16ec-trusted-ca\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.027775 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/de71c708-910f-44de-8a6c-93671ddc16ec-installation-pull-secrets\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.027853 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de71c708-910f-44de-8a6c-93671ddc16ec-bound-sa-token\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.027887 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de71c708-910f-44de-8a6c-93671ddc16ec-registry-certificates\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.027950 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de71c708-910f-44de-8a6c-93671ddc16ec-registry-tls\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.028025 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.094040 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.129170 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de71c708-910f-44de-8a6c-93671ddc16ec-ca-trust-extracted\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.129256 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dfdh\" (UniqueName: \"kubernetes.io/projected/de71c708-910f-44de-8a6c-93671ddc16ec-kube-api-access-5dfdh\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.129279 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de71c708-910f-44de-8a6c-93671ddc16ec-trusted-ca\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.129303 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/de71c708-910f-44de-8a6c-93671ddc16ec-installation-pull-secrets\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.129325 
4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de71c708-910f-44de-8a6c-93671ddc16ec-bound-sa-token\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.129344 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de71c708-910f-44de-8a6c-93671ddc16ec-registry-tls\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.129358 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de71c708-910f-44de-8a6c-93671ddc16ec-registry-certificates\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.130533 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de71c708-910f-44de-8a6c-93671ddc16ec-registry-certificates\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.130790 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de71c708-910f-44de-8a6c-93671ddc16ec-ca-trust-extracted\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.131756 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de71c708-910f-44de-8a6c-93671ddc16ec-trusted-ca\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.138282 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/de71c708-910f-44de-8a6c-93671ddc16ec-installation-pull-secrets\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.138530 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de71c708-910f-44de-8a6c-93671ddc16ec-registry-tls\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.154619 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dfdh\" (UniqueName: \"kubernetes.io/projected/de71c708-910f-44de-8a6c-93671ddc16ec-kube-api-access-5dfdh\") pod \"image-registry-66df7c8f76-24qnt\" (UID: \"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.160431 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de71c708-910f-44de-8a6c-93671ddc16ec-bound-sa-token\") pod \"image-registry-66df7c8f76-24qnt\" (UID: 
\"de71c708-910f-44de-8a6c-93671ddc16ec\") " pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.171459 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535078-ktbp9"] Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.172337 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535078-ktbp9" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.178284 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535078-ktbp9"] Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.179009 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.179019 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.179433 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.234805 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.331665 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rggfp\" (UniqueName: \"kubernetes.io/projected/4c181d85-a2e5-4771-a5a7-6cdd1f944012-kube-api-access-rggfp\") pod \"auto-csr-approver-29535078-ktbp9\" (UID: \"4c181d85-a2e5-4771-a5a7-6cdd1f944012\") " pod="openshift-infra/auto-csr-approver-29535078-ktbp9" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.433151 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rggfp\" (UniqueName: \"kubernetes.io/projected/4c181d85-a2e5-4771-a5a7-6cdd1f944012-kube-api-access-rggfp\") pod \"auto-csr-approver-29535078-ktbp9\" (UID: \"4c181d85-a2e5-4771-a5a7-6cdd1f944012\") " pod="openshift-infra/auto-csr-approver-29535078-ktbp9" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.453602 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rggfp\" (UniqueName: \"kubernetes.io/projected/4c181d85-a2e5-4771-a5a7-6cdd1f944012-kube-api-access-rggfp\") pod \"auto-csr-approver-29535078-ktbp9\" (UID: \"4c181d85-a2e5-4771-a5a7-6cdd1f944012\") " pod="openshift-infra/auto-csr-approver-29535078-ktbp9" Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.508991 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535078-ktbp9"
Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.687658 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-24qnt"]
Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.725919 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535078-ktbp9"]
Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.786310 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" event={"ID":"de71c708-910f-44de-8a6c-93671ddc16ec","Type":"ContainerStarted","Data":"9cad155f29ac39dc68f3853e794d34c9ce1f1e95cf1f7316ec414ba766fa4b92"}
Feb 26 11:18:00 crc kubenswrapper[4699]: I0226 11:18:00.788821 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535078-ktbp9" event={"ID":"4c181d85-a2e5-4771-a5a7-6cdd1f944012","Type":"ContainerStarted","Data":"a8b9a3e7ee013f3209491e0261ab397a01ff3add4cc6f302c1c814bc438cbfa0"}
Feb 26 11:18:01 crc kubenswrapper[4699]: I0226 11:18:01.800697 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" event={"ID":"de71c708-910f-44de-8a6c-93671ddc16ec","Type":"ContainerStarted","Data":"755cf55d6dbdcb454ce511304bc1376acc6aa101a58f6c31fa5b30321cd709ed"}
Feb 26 11:18:01 crc kubenswrapper[4699]: I0226 11:18:01.801818 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-24qnt"
Feb 26 11:18:01 crc kubenswrapper[4699]: I0226 11:18:01.824671 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-24qnt" podStartSLOduration=2.8246496260000002 podStartE2EDuration="2.824649626s" podCreationTimestamp="2026-02-26 11:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:18:01.820198647 +0000 UTC m=+427.631025101" watchObservedRunningTime="2026-02-26 11:18:01.824649626 +0000 UTC m=+427.635476060"
Feb 26 11:18:02 crc kubenswrapper[4699]: I0226 11:18:02.811613 4699 generic.go:334] "Generic (PLEG): container finished" podID="4c181d85-a2e5-4771-a5a7-6cdd1f944012" containerID="1eda56a25e25c14621838f63ba6ea80e65461406feb4a8836fe9fda800de7616" exitCode=0
Feb 26 11:18:02 crc kubenswrapper[4699]: I0226 11:18:02.811762 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535078-ktbp9" event={"ID":"4c181d85-a2e5-4771-a5a7-6cdd1f944012","Type":"ContainerDied","Data":"1eda56a25e25c14621838f63ba6ea80e65461406feb4a8836fe9fda800de7616"}
Feb 26 11:18:04 crc kubenswrapper[4699]: I0226 11:18:04.092506 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535078-ktbp9"
Feb 26 11:18:04 crc kubenswrapper[4699]: I0226 11:18:04.185277 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rggfp\" (UniqueName: \"kubernetes.io/projected/4c181d85-a2e5-4771-a5a7-6cdd1f944012-kube-api-access-rggfp\") pod \"4c181d85-a2e5-4771-a5a7-6cdd1f944012\" (UID: \"4c181d85-a2e5-4771-a5a7-6cdd1f944012\") "
Feb 26 11:18:04 crc kubenswrapper[4699]: I0226 11:18:04.191659 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c181d85-a2e5-4771-a5a7-6cdd1f944012-kube-api-access-rggfp" (OuterVolumeSpecName: "kube-api-access-rggfp") pod "4c181d85-a2e5-4771-a5a7-6cdd1f944012" (UID: "4c181d85-a2e5-4771-a5a7-6cdd1f944012"). InnerVolumeSpecName "kube-api-access-rggfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:18:04 crc kubenswrapper[4699]: I0226 11:18:04.286868 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rggfp\" (UniqueName: \"kubernetes.io/projected/4c181d85-a2e5-4771-a5a7-6cdd1f944012-kube-api-access-rggfp\") on node \"crc\" DevicePath \"\""
Feb 26 11:18:04 crc kubenswrapper[4699]: I0226 11:18:04.824719 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535078-ktbp9" event={"ID":"4c181d85-a2e5-4771-a5a7-6cdd1f944012","Type":"ContainerDied","Data":"a8b9a3e7ee013f3209491e0261ab397a01ff3add4cc6f302c1c814bc438cbfa0"}
Feb 26 11:18:04 crc kubenswrapper[4699]: I0226 11:18:04.824985 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b9a3e7ee013f3209491e0261ab397a01ff3add4cc6f302c1c814bc438cbfa0"
Feb 26 11:18:04 crc kubenswrapper[4699]: I0226 11:18:04.824764 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535078-ktbp9"
Feb 26 11:18:11 crc kubenswrapper[4699]: I0226 11:18:11.585510 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 11:18:11 crc kubenswrapper[4699]: I0226 11:18:11.585586 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 11:18:20 crc kubenswrapper[4699]: I0226 11:18:20.241474 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-24qnt"
Feb 26 11:18:20 crc kubenswrapper[4699]: I0226 11:18:20.315889 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8656"]
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.737397 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzgjj"]
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.738501 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mzgjj" podUID="71a83978-4f86-404b-967a-0e7493ff6721" containerName="registry-server" containerID="cri-o://c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a" gracePeriod=30
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.743368 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phhbz"]
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.743644 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-phhbz" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" containerName="registry-server" containerID="cri-o://4459df84e7aab7535bf4732238c87c4da5222e3237b69439fff20886a1ea7688" gracePeriod=30
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.765556 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-czwkc"]
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.766035 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-czwkc" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerName="registry-server" containerID="cri-o://b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca" gracePeriod=30
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.774793 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cd5qf"]
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.775212 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" podUID="5cc10041-704b-4b00-8e4e-369103434b64" containerName="marketplace-operator" containerID="cri-o://2d5a0c0e5922846f3c8cbf86200e16bf7b7b0416026b9755460756c2a821cc04" gracePeriod=30
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.797686 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrk4n"]
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.798060 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hrk4n" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" containerName="registry-server" containerID="cri-o://aa39a04716f8cdcc694931265437b30c1cb1c3615ab11016472ce3e95c18688b" gracePeriod=30
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.804021 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8kpz"]
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.804515 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s8kpz" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" containerName="registry-server" containerID="cri-o://bca562e2d2fabc5097841d6398d6c6b6a6779605566f4dd3173111ee1e8c04f3" gracePeriod=30
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.817573 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nwbkq"]
Feb 26 11:18:26 crc kubenswrapper[4699]: E0226 11:18:26.817841 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c181d85-a2e5-4771-a5a7-6cdd1f944012" containerName="oc"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.817856 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c181d85-a2e5-4771-a5a7-6cdd1f944012" containerName="oc"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.817978 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c181d85-a2e5-4771-a5a7-6cdd1f944012" containerName="oc"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.818438 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.822105 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/43a980f6-1eff-4610-aa3e-69729c3eb7c7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nwbkq\" (UID: \"43a980f6-1eff-4610-aa3e-69729c3eb7c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.822178 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tff82\" (UniqueName: \"kubernetes.io/projected/43a980f6-1eff-4610-aa3e-69729c3eb7c7-kube-api-access-tff82\") pod \"marketplace-operator-79b997595-nwbkq\" (UID: \"43a980f6-1eff-4610-aa3e-69729c3eb7c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.822198 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43a980f6-1eff-4610-aa3e-69729c3eb7c7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nwbkq\" (UID: \"43a980f6-1eff-4610-aa3e-69729c3eb7c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.822475 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jhgks"]
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.822777 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jhgks" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" containerName="registry-server" containerID="cri-o://58ec65080ac68341b08f4272194fe62d85383a27766f002151749856e7c508e7" gracePeriod=30
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.827722 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sc9c6"]
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.828009 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sc9c6" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" containerName="registry-server" containerID="cri-o://704c8ba25f50ac5c881bb9d05eb872ee3851c9a21d28c2acf7a27a400acbebe0" gracePeriod=30
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.830361 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nwbkq"]
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.923725 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43a980f6-1eff-4610-aa3e-69729c3eb7c7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nwbkq\" (UID: \"43a980f6-1eff-4610-aa3e-69729c3eb7c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.924214 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/43a980f6-1eff-4610-aa3e-69729c3eb7c7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nwbkq\" (UID: \"43a980f6-1eff-4610-aa3e-69729c3eb7c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.924355 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tff82\" (UniqueName: \"kubernetes.io/projected/43a980f6-1eff-4610-aa3e-69729c3eb7c7-kube-api-access-tff82\") pod \"marketplace-operator-79b997595-nwbkq\" (UID: \"43a980f6-1eff-4610-aa3e-69729c3eb7c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.925540 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43a980f6-1eff-4610-aa3e-69729c3eb7c7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nwbkq\" (UID: \"43a980f6-1eff-4610-aa3e-69729c3eb7c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.930720 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/43a980f6-1eff-4610-aa3e-69729c3eb7c7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nwbkq\" (UID: \"43a980f6-1eff-4610-aa3e-69729c3eb7c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:26 crc kubenswrapper[4699]: I0226 11:18:26.948347 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tff82\" (UniqueName: \"kubernetes.io/projected/43a980f6-1eff-4610-aa3e-69729c3eb7c7-kube-api-access-tff82\") pod \"marketplace-operator-79b997595-nwbkq\" (UID: \"43a980f6-1eff-4610-aa3e-69729c3eb7c7\") " pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.021696 4699 generic.go:334] "Generic (PLEG): container finished" podID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerID="b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca" exitCode=0
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.021763 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czwkc" event={"ID":"ac0026c3-1fad-4b34-9c42-389971f0c773","Type":"ContainerDied","Data":"b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca"}
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.025952 4699 generic.go:334] "Generic (PLEG): container finished" podID="6b9da973-6b5f-4485-adca-8792b0a3d256" containerID="58ec65080ac68341b08f4272194fe62d85383a27766f002151749856e7c508e7" exitCode=0
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.026037 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhgks" event={"ID":"6b9da973-6b5f-4485-adca-8792b0a3d256","Type":"ContainerDied","Data":"58ec65080ac68341b08f4272194fe62d85383a27766f002151749856e7c508e7"}
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.028159 4699 generic.go:334] "Generic (PLEG): container finished" podID="44d171ad-7d92-4c70-a686-65f60ded8a03" containerID="704c8ba25f50ac5c881bb9d05eb872ee3851c9a21d28c2acf7a27a400acbebe0" exitCode=0
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.028606 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc9c6" event={"ID":"44d171ad-7d92-4c70-a686-65f60ded8a03","Type":"ContainerDied","Data":"704c8ba25f50ac5c881bb9d05eb872ee3851c9a21d28c2acf7a27a400acbebe0"}
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.030974 4699 generic.go:334] "Generic (PLEG): container finished" podID="6e7ddf51-5522-4085-8567-76c9a254ed15" containerID="aa39a04716f8cdcc694931265437b30c1cb1c3615ab11016472ce3e95c18688b" exitCode=0
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.031073 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrk4n" event={"ID":"6e7ddf51-5522-4085-8567-76c9a254ed15","Type":"ContainerDied","Data":"aa39a04716f8cdcc694931265437b30c1cb1c3615ab11016472ce3e95c18688b"}
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.033480 4699 generic.go:334] "Generic (PLEG): container finished" podID="8c96a703-e568-4916-8035-a951ae91dc2b" containerID="bca562e2d2fabc5097841d6398d6c6b6a6779605566f4dd3173111ee1e8c04f3" exitCode=0
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.033577 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8kpz" event={"ID":"8c96a703-e568-4916-8035-a951ae91dc2b","Type":"ContainerDied","Data":"bca562e2d2fabc5097841d6398d6c6b6a6779605566f4dd3173111ee1e8c04f3"}
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.036027 4699 generic.go:334] "Generic (PLEG): container finished" podID="9ea10063-7888-400e-af1c-216cbde5a13e" containerID="4459df84e7aab7535bf4732238c87c4da5222e3237b69439fff20886a1ea7688" exitCode=0
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.036104 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhbz" event={"ID":"9ea10063-7888-400e-af1c-216cbde5a13e","Type":"ContainerDied","Data":"4459df84e7aab7535bf4732238c87c4da5222e3237b69439fff20886a1ea7688"}
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.037375 4699 generic.go:334] "Generic (PLEG): container finished" podID="5cc10041-704b-4b00-8e4e-369103434b64" containerID="2d5a0c0e5922846f3c8cbf86200e16bf7b7b0416026b9755460756c2a821cc04" exitCode=0
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.037438 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" event={"ID":"5cc10041-704b-4b00-8e4e-369103434b64","Type":"ContainerDied","Data":"2d5a0c0e5922846f3c8cbf86200e16bf7b7b0416026b9755460756c2a821cc04"}
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.039531 4699 generic.go:334] "Generic (PLEG): container finished" podID="71a83978-4f86-404b-967a-0e7493ff6721" containerID="c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a" exitCode=0
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.039633 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzgjj" event={"ID":"71a83978-4f86-404b-967a-0e7493ff6721","Type":"ContainerDied","Data":"c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a"}
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.152538 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:27 crc kubenswrapper[4699]: E0226 11:18:27.576825 4699 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a is running failed: container process not found" containerID="c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a" cmd=["grpc_health_probe","-addr=:50051"]
Feb 26 11:18:27 crc kubenswrapper[4699]: E0226 11:18:27.577749 4699 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a is running failed: container process not found" containerID="c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a" cmd=["grpc_health_probe","-addr=:50051"]
Feb 26 11:18:27 crc kubenswrapper[4699]: E0226 11:18:27.578491 4699 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a is running failed: container process not found" containerID="c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a" cmd=["grpc_health_probe","-addr=:50051"]
Feb 26 11:18:27 crc kubenswrapper[4699]: E0226 11:18:27.578534 4699 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-mzgjj" podUID="71a83978-4f86-404b-967a-0e7493ff6721" containerName="registry-server"
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.596616 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nwbkq"]
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.662910 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzgjj"
Feb 26 11:18:27 crc kubenswrapper[4699]: E0226 11:18:27.684716 4699 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca is running failed: container process not found" containerID="b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca" cmd=["grpc_health_probe","-addr=:50051"]
Feb 26 11:18:27 crc kubenswrapper[4699]: E0226 11:18:27.687543 4699 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca is running failed: container process not found" containerID="b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca" cmd=["grpc_health_probe","-addr=:50051"]
Feb 26 11:18:27 crc kubenswrapper[4699]: E0226 11:18:27.688134 4699 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca is running failed: container process not found" containerID="b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca" cmd=["grpc_health_probe","-addr=:50051"]
Feb 26 11:18:27 crc kubenswrapper[4699]: E0226 11:18:27.688176 4699 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-czwkc" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerName="registry-server"
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.719323 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phhbz"
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.841839 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-catalog-content\") pod \"9ea10063-7888-400e-af1c-216cbde5a13e\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") "
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.841897 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-utilities\") pod \"71a83978-4f86-404b-967a-0e7493ff6721\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") "
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.841929 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z6wd\" (UniqueName: \"kubernetes.io/projected/71a83978-4f86-404b-967a-0e7493ff6721-kube-api-access-9z6wd\") pod \"71a83978-4f86-404b-967a-0e7493ff6721\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") "
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.841969 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-catalog-content\") pod \"71a83978-4f86-404b-967a-0e7493ff6721\" (UID: \"71a83978-4f86-404b-967a-0e7493ff6721\") "
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.842043 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-699tw\" (UniqueName: \"kubernetes.io/projected/9ea10063-7888-400e-af1c-216cbde5a13e-kube-api-access-699tw\") pod \"9ea10063-7888-400e-af1c-216cbde5a13e\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") "
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.842066 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-utilities\") pod \"9ea10063-7888-400e-af1c-216cbde5a13e\" (UID: \"9ea10063-7888-400e-af1c-216cbde5a13e\") "
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.842969 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-utilities" (OuterVolumeSpecName: "utilities") pod "9ea10063-7888-400e-af1c-216cbde5a13e" (UID: "9ea10063-7888-400e-af1c-216cbde5a13e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.851013 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea10063-7888-400e-af1c-216cbde5a13e-kube-api-access-699tw" (OuterVolumeSpecName: "kube-api-access-699tw") pod "9ea10063-7888-400e-af1c-216cbde5a13e" (UID: "9ea10063-7888-400e-af1c-216cbde5a13e"). InnerVolumeSpecName "kube-api-access-699tw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.853779 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-utilities" (OuterVolumeSpecName: "utilities") pod "71a83978-4f86-404b-967a-0e7493ff6721" (UID: "71a83978-4f86-404b-967a-0e7493ff6721"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.854583 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a83978-4f86-404b-967a-0e7493ff6721-kube-api-access-9z6wd" (OuterVolumeSpecName: "kube-api-access-9z6wd") pod "71a83978-4f86-404b-967a-0e7493ff6721" (UID: "71a83978-4f86-404b-967a-0e7493ff6721"). InnerVolumeSpecName "kube-api-access-9z6wd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.923420 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ea10063-7888-400e-af1c-216cbde5a13e" (UID: "9ea10063-7888-400e-af1c-216cbde5a13e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.924904 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71a83978-4f86-404b-967a-0e7493ff6721" (UID: "71a83978-4f86-404b-967a-0e7493ff6721"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.950289 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.950787 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.950816 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z6wd\" (UniqueName: \"kubernetes.io/projected/71a83978-4f86-404b-967a-0e7493ff6721-kube-api-access-9z6wd\") on node \"crc\" DevicePath \"\""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.950830 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71a83978-4f86-404b-967a-0e7493ff6721-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.950842 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-699tw\" (UniqueName: \"kubernetes.io/projected/9ea10063-7888-400e-af1c-216cbde5a13e-kube-api-access-699tw\") on node \"crc\" DevicePath \"\""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.950853 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea10063-7888-400e-af1c-216cbde5a13e-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 11:18:27 crc kubenswrapper[4699]: I0226 11:18:27.996804 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sc9c6"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.008765 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8kpz"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.010524 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrk4n"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.023024 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhgks"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.035368 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-czwkc"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.036355 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.054402 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf" event={"ID":"5cc10041-704b-4b00-8e4e-369103434b64","Type":"ContainerDied","Data":"be07ebbed72d10e6a52397198b9b567e946941b2a2ee6b1a35e4358ea9958b9f"}
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.054455 4699 scope.go:117] "RemoveContainer" containerID="2d5a0c0e5922846f3c8cbf86200e16bf7b7b0416026b9755460756c2a821cc04"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.054562 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cd5qf"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.058363 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzgjj" event={"ID":"71a83978-4f86-404b-967a-0e7493ff6721","Type":"ContainerDied","Data":"1c165f4cac2c47ef0e2f5ea976276ea6634d20dbf88d2b070f23064d87eecce4"}
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.058499 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzgjj"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.070505 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhgks" event={"ID":"6b9da973-6b5f-4485-adca-8792b0a3d256","Type":"ContainerDied","Data":"1df59f3f6cf47eeaee6c7803f5d095457eb18adeaca6dc9c81e5b0dfb758e003"}
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.070564 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhgks"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.072911 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8kpz" event={"ID":"8c96a703-e568-4916-8035-a951ae91dc2b","Type":"ContainerDied","Data":"18a720cd12fbf1604976388b722cf7ea85f1660cb3d90ac7f016d51d465b43d1"}
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.073027 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8kpz"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.075457 4699 scope.go:117] "RemoveContainer" containerID="c40aadf0dd4571fbdbc8a87295a7b711ac5b2622ecf8b10a9b30c71872d6459a"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.079430 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhbz" event={"ID":"9ea10063-7888-400e-af1c-216cbde5a13e","Type":"ContainerDied","Data":"c8dc58ce346d0f6b6aad0363b33f0cf4112745523923e0e7d1cf3d865b90372a"}
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.079498 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phhbz"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.081561 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czwkc" event={"ID":"ac0026c3-1fad-4b34-9c42-389971f0c773","Type":"ContainerDied","Data":"31376761fbf12a5b81018d6bde894ab4db92607e39e297d6342dce3d31049346"}
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.081636 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-czwkc"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.084792 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc9c6" event={"ID":"44d171ad-7d92-4c70-a686-65f60ded8a03","Type":"ContainerDied","Data":"8416abc544344d1375d554f38d43ac67e9642de8063e20464268f9eaf0d51147"}
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.084867 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sc9c6"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.086065 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq" event={"ID":"43a980f6-1eff-4610-aa3e-69729c3eb7c7","Type":"ContainerStarted","Data":"1f6aa9e2c51069147ca61695d18adc4bd68eb808c51979e71400258ae22e6a56"}
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.086100 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq" event={"ID":"43a980f6-1eff-4610-aa3e-69729c3eb7c7","Type":"ContainerStarted","Data":"cafc0def26259d80ebcc7f8a94991c05183edc9e566b9ec57a181642d2661d9b"}
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.088218 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.095747 4699 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nwbkq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body=
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.095950 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq" podUID="43a980f6-1eff-4610-aa3e-69729c3eb7c7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused"
Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.099254 4699 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrk4n" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.099107 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrk4n" event={"ID":"6e7ddf51-5522-4085-8567-76c9a254ed15","Type":"ContainerDied","Data":"64ab7f5c1142b79d1cad6017fda721d048cccdd042121faa577213948620ffa2"} Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.115993 4699 scope.go:117] "RemoveContainer" containerID="f41fa5d8badc750f1371bec0896b93547f2bd25c6f1942a17a10cfb9c1edba94" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.136988 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phhbz"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.141135 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-phhbz"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153561 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqrqs\" (UniqueName: \"kubernetes.io/projected/ac0026c3-1fad-4b34-9c42-389971f0c773-kube-api-access-rqrqs\") pod \"ac0026c3-1fad-4b34-9c42-389971f0c773\" (UID: \"ac0026c3-1fad-4b34-9c42-389971f0c773\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153624 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-catalog-content\") pod \"6b9da973-6b5f-4485-adca-8792b0a3d256\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153646 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-catalog-content\") pod \"6e7ddf51-5522-4085-8567-76c9a254ed15\" (UID: 
\"6e7ddf51-5522-4085-8567-76c9a254ed15\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153663 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tqhd\" (UniqueName: \"kubernetes.io/projected/44d171ad-7d92-4c70-a686-65f60ded8a03-kube-api-access-2tqhd\") pod \"44d171ad-7d92-4c70-a686-65f60ded8a03\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153682 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-utilities\") pod \"6b9da973-6b5f-4485-adca-8792b0a3d256\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153699 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-catalog-content\") pod \"44d171ad-7d92-4c70-a686-65f60ded8a03\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153719 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-trusted-ca\") pod \"5cc10041-704b-4b00-8e4e-369103434b64\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153766 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr725\" (UniqueName: \"kubernetes.io/projected/8c96a703-e568-4916-8035-a951ae91dc2b-kube-api-access-rr725\") pod \"8c96a703-e568-4916-8035-a951ae91dc2b\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153799 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7xhdp\" (UniqueName: \"kubernetes.io/projected/6e7ddf51-5522-4085-8567-76c9a254ed15-kube-api-access-7xhdp\") pod \"6e7ddf51-5522-4085-8567-76c9a254ed15\" (UID: \"6e7ddf51-5522-4085-8567-76c9a254ed15\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153817 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-utilities\") pod \"8c96a703-e568-4916-8035-a951ae91dc2b\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153849 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-catalog-content\") pod \"8c96a703-e568-4916-8035-a951ae91dc2b\" (UID: \"8c96a703-e568-4916-8035-a951ae91dc2b\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153869 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44jnw\" (UniqueName: \"kubernetes.io/projected/6b9da973-6b5f-4485-adca-8792b0a3d256-kube-api-access-44jnw\") pod \"6b9da973-6b5f-4485-adca-8792b0a3d256\" (UID: \"6b9da973-6b5f-4485-adca-8792b0a3d256\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153886 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwq64\" (UniqueName: \"kubernetes.io/projected/5cc10041-704b-4b00-8e4e-369103434b64-kube-api-access-bwq64\") pod \"5cc10041-704b-4b00-8e4e-369103434b64\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153923 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-catalog-content\") pod \"ac0026c3-1fad-4b34-9c42-389971f0c773\" (UID: 
\"ac0026c3-1fad-4b34-9c42-389971f0c773\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153941 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-operator-metrics\") pod \"5cc10041-704b-4b00-8e4e-369103434b64\" (UID: \"5cc10041-704b-4b00-8e4e-369103434b64\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153962 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-utilities\") pod \"ac0026c3-1fad-4b34-9c42-389971f0c773\" (UID: \"ac0026c3-1fad-4b34-9c42-389971f0c773\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.153987 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-utilities\") pod \"44d171ad-7d92-4c70-a686-65f60ded8a03\" (UID: \"44d171ad-7d92-4c70-a686-65f60ded8a03\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.154015 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-utilities\") pod \"6e7ddf51-5522-4085-8567-76c9a254ed15\" (UID: \"6e7ddf51-5522-4085-8567-76c9a254ed15\") " Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.158857 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5cc10041-704b-4b00-8e4e-369103434b64" (UID: "5cc10041-704b-4b00-8e4e-369103434b64"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.160057 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7ddf51-5522-4085-8567-76c9a254ed15-kube-api-access-7xhdp" (OuterVolumeSpecName: "kube-api-access-7xhdp") pod "6e7ddf51-5522-4085-8567-76c9a254ed15" (UID: "6e7ddf51-5522-4085-8567-76c9a254ed15"). InnerVolumeSpecName "kube-api-access-7xhdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.160466 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-utilities" (OuterVolumeSpecName: "utilities") pod "ac0026c3-1fad-4b34-9c42-389971f0c773" (UID: "ac0026c3-1fad-4b34-9c42-389971f0c773"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.161855 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-utilities" (OuterVolumeSpecName: "utilities") pod "44d171ad-7d92-4c70-a686-65f60ded8a03" (UID: "44d171ad-7d92-4c70-a686-65f60ded8a03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.162756 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-utilities" (OuterVolumeSpecName: "utilities") pod "6e7ddf51-5522-4085-8567-76c9a254ed15" (UID: "6e7ddf51-5522-4085-8567-76c9a254ed15"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.162927 4699 scope.go:117] "RemoveContainer" containerID="f1b31944470f82af52e860af7004767cf2db0ef2acdf2a9986adc95701213e55" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.165175 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-utilities" (OuterVolumeSpecName: "utilities") pod "8c96a703-e568-4916-8035-a951ae91dc2b" (UID: "8c96a703-e568-4916-8035-a951ae91dc2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.168306 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-utilities" (OuterVolumeSpecName: "utilities") pod "6b9da973-6b5f-4485-adca-8792b0a3d256" (UID: "6b9da973-6b5f-4485-adca-8792b0a3d256"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.169554 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac0026c3-1fad-4b34-9c42-389971f0c773-kube-api-access-rqrqs" (OuterVolumeSpecName: "kube-api-access-rqrqs") pod "ac0026c3-1fad-4b34-9c42-389971f0c773" (UID: "ac0026c3-1fad-4b34-9c42-389971f0c773"). InnerVolumeSpecName "kube-api-access-rqrqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.172950 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cc10041-704b-4b00-8e4e-369103434b64-kube-api-access-bwq64" (OuterVolumeSpecName: "kube-api-access-bwq64") pod "5cc10041-704b-4b00-8e4e-369103434b64" (UID: "5cc10041-704b-4b00-8e4e-369103434b64"). InnerVolumeSpecName "kube-api-access-bwq64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.173529 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5cc10041-704b-4b00-8e4e-369103434b64" (UID: "5cc10041-704b-4b00-8e4e-369103434b64"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.174801 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d171ad-7d92-4c70-a686-65f60ded8a03-kube-api-access-2tqhd" (OuterVolumeSpecName: "kube-api-access-2tqhd") pod "44d171ad-7d92-4c70-a686-65f60ded8a03" (UID: "44d171ad-7d92-4c70-a686-65f60ded8a03"). InnerVolumeSpecName "kube-api-access-2tqhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.175520 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c96a703-e568-4916-8035-a951ae91dc2b-kube-api-access-rr725" (OuterVolumeSpecName: "kube-api-access-rr725") pod "8c96a703-e568-4916-8035-a951ae91dc2b" (UID: "8c96a703-e568-4916-8035-a951ae91dc2b"). InnerVolumeSpecName "kube-api-access-rr725". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.177413 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b9da973-6b5f-4485-adca-8792b0a3d256-kube-api-access-44jnw" (OuterVolumeSpecName: "kube-api-access-44jnw") pod "6b9da973-6b5f-4485-adca-8792b0a3d256" (UID: "6b9da973-6b5f-4485-adca-8792b0a3d256"). InnerVolumeSpecName "kube-api-access-44jnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.190632 4699 scope.go:117] "RemoveContainer" containerID="58ec65080ac68341b08f4272194fe62d85383a27766f002151749856e7c508e7" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.212413 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzgjj"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.213348 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c96a703-e568-4916-8035-a951ae91dc2b" (UID: "8c96a703-e568-4916-8035-a951ae91dc2b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.217684 4699 scope.go:117] "RemoveContainer" containerID="7d653e44fd8d815b615ce9635176302fd8a0ad6d3f93420c0c7d85da3992bebc" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.224689 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mzgjj"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.233865 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq" podStartSLOduration=2.233837938 podStartE2EDuration="2.233837938s" podCreationTimestamp="2026-02-26 11:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:18:28.224974652 +0000 UTC m=+454.035801106" watchObservedRunningTime="2026-02-26 11:18:28.233837938 +0000 UTC m=+454.044664372" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.244759 4699 scope.go:117] "RemoveContainer" containerID="e514effd43a8aac49eb2edbdb6959f6095c102c0f8bc4412986233930c5d5ff6" Feb 26 11:18:28 crc 
kubenswrapper[4699]: I0226 11:18:28.255649 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xhdp\" (UniqueName: \"kubernetes.io/projected/6e7ddf51-5522-4085-8567-76c9a254ed15-kube-api-access-7xhdp\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.255887 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.255920 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c96a703-e568-4916-8035-a951ae91dc2b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.255934 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwq64\" (UniqueName: \"kubernetes.io/projected/5cc10041-704b-4b00-8e4e-369103434b64-kube-api-access-bwq64\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.255947 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44jnw\" (UniqueName: \"kubernetes.io/projected/6b9da973-6b5f-4485-adca-8792b0a3d256-kube-api-access-44jnw\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.255959 4699 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.256030 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.256046 4699 reconciler_common.go:293] 
"Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.256059 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.256071 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqrqs\" (UniqueName: \"kubernetes.io/projected/ac0026c3-1fad-4b34-9c42-389971f0c773-kube-api-access-rqrqs\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.256084 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tqhd\" (UniqueName: \"kubernetes.io/projected/44d171ad-7d92-4c70-a686-65f60ded8a03-kube-api-access-2tqhd\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.256098 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.257211 4699 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cc10041-704b-4b00-8e4e-369103434b64-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.257243 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr725\" (UniqueName: \"kubernetes.io/projected/8c96a703-e568-4916-8035-a951ae91dc2b-kube-api-access-rr725\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.265159 4699 scope.go:117] "RemoveContainer" containerID="bca562e2d2fabc5097841d6398d6c6b6a6779605566f4dd3173111ee1e8c04f3" Feb 26 
11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.266458 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac0026c3-1fad-4b34-9c42-389971f0c773" (UID: "ac0026c3-1fad-4b34-9c42-389971f0c773"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.270775 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a83978-4f86-404b-967a-0e7493ff6721" path="/var/lib/kubelet/pods/71a83978-4f86-404b-967a-0e7493ff6721/volumes" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.273613 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" path="/var/lib/kubelet/pods/9ea10063-7888-400e-af1c-216cbde5a13e/volumes" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.282145 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e7ddf51-5522-4085-8567-76c9a254ed15" (UID: "6e7ddf51-5522-4085-8567-76c9a254ed15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.285524 4699 scope.go:117] "RemoveContainer" containerID="7480103b052e67e1c14af93c5ed9ab5b5c3150d0a1dbb5d35641a39bc2cc9515" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.331132 4699 scope.go:117] "RemoveContainer" containerID="0c88d150d726034804b09cdfd6ed7b9a516e4ecd807d5799c0ea12f3955c7b69" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.355820 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44d171ad-7d92-4c70-a686-65f60ded8a03" (UID: "44d171ad-7d92-4c70-a686-65f60ded8a03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.358984 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac0026c3-1fad-4b34-9c42-389971f0c773-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.359016 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7ddf51-5522-4085-8567-76c9a254ed15-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.359038 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44d171ad-7d92-4c70-a686-65f60ded8a03-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.362513 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b9da973-6b5f-4485-adca-8792b0a3d256" (UID: 
"6b9da973-6b5f-4485-adca-8792b0a3d256"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.372461 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cd5qf"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.375924 4699 scope.go:117] "RemoveContainer" containerID="4459df84e7aab7535bf4732238c87c4da5222e3237b69439fff20886a1ea7688" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.376314 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cd5qf"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.402489 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8kpz"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.405020 4699 scope.go:117] "RemoveContainer" containerID="c429ee05cb01901447a5e3bded424d4a0427e987ffd209a1f29754bcb9be9b4d" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.409196 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8kpz"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.423426 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jhgks"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.429240 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jhgks"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.442608 4699 scope.go:117] "RemoveContainer" containerID="e2ca3e75def51c6eedb622aaa6507c8da48849ebf241567dc8e903d48fc3a6e5" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.443527 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-czwkc"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.452263 4699 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/community-operators-czwkc"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.458750 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrk4n"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.459442 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b9da973-6b5f-4485-adca-8792b0a3d256-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.462107 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrk4n"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.462107 4699 scope.go:117] "RemoveContainer" containerID="b3876e98ecf09cf7959a550cdff441956e5f5211a25918e3b5b1ff888aa1faca" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.466733 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sc9c6"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.471540 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sc9c6"] Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.483003 4699 scope.go:117] "RemoveContainer" containerID="919888fa21cfe39704e1b0c864c73cd7cdeeac94e5ee1bb4c79246202be61323" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.498961 4699 scope.go:117] "RemoveContainer" containerID="39ff3a6e4269604cce0aea66db001b967d934c0076038e7958d8b015de9375a1" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.513816 4699 scope.go:117] "RemoveContainer" containerID="704c8ba25f50ac5c881bb9d05eb872ee3851c9a21d28c2acf7a27a400acbebe0" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.526737 4699 scope.go:117] "RemoveContainer" containerID="d27dda8ede66374aa47b77a60b930fa0b6c4e065e9c9b269dc3e8dd85fa02ece" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.547148 4699 
scope.go:117] "RemoveContainer" containerID="0d415d903af1673dff3ecf368cade4c0a0a93c2b3158c0519393d68509c7e6d3" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.570311 4699 scope.go:117] "RemoveContainer" containerID="aa39a04716f8cdcc694931265437b30c1cb1c3615ab11016472ce3e95c18688b" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.585823 4699 scope.go:117] "RemoveContainer" containerID="e63934f65b729d4f1b8b668dbe9b4795f057f647c6b7a160c5e82634ad1de5fd" Feb 26 11:18:28 crc kubenswrapper[4699]: I0226 11:18:28.608402 4699 scope.go:117] "RemoveContainer" containerID="3255d554cf00b3f149c14b7b5562baa6c773b2f01ac34c99e514e81d89810bb1" Feb 26 11:18:29 crc kubenswrapper[4699]: I0226 11:18:29.117274 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nwbkq" Feb 26 11:18:30 crc kubenswrapper[4699]: I0226 11:18:30.268468 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" path="/var/lib/kubelet/pods/44d171ad-7d92-4c70-a686-65f60ded8a03/volumes" Feb 26 11:18:30 crc kubenswrapper[4699]: I0226 11:18:30.269363 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cc10041-704b-4b00-8e4e-369103434b64" path="/var/lib/kubelet/pods/5cc10041-704b-4b00-8e4e-369103434b64/volumes" Feb 26 11:18:30 crc kubenswrapper[4699]: I0226 11:18:30.269926 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" path="/var/lib/kubelet/pods/6b9da973-6b5f-4485-adca-8792b0a3d256/volumes" Feb 26 11:18:30 crc kubenswrapper[4699]: I0226 11:18:30.271175 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" path="/var/lib/kubelet/pods/6e7ddf51-5522-4085-8567-76c9a254ed15/volumes" Feb 26 11:18:30 crc kubenswrapper[4699]: I0226 11:18:30.271915 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8c96a703-e568-4916-8035-a951ae91dc2b" path="/var/lib/kubelet/pods/8c96a703-e568-4916-8035-a951ae91dc2b/volumes" Feb 26 11:18:30 crc kubenswrapper[4699]: I0226 11:18:30.273151 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" path="/var/lib/kubelet/pods/ac0026c3-1fad-4b34-9c42-389971f0c773/volumes" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.978200 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r555d"] Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.978895 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.978909 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.978917 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.978923 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.978933 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.978940 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.978946 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.978952 
4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.978961 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.978967 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.978974 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.978979 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.978988 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.978993 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979002 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a83978-4f86-404b-967a-0e7493ff6721" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979010 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a83978-4f86-404b-967a-0e7493ff6721" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979017 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979023 4699 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979031 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc10041-704b-4b00-8e4e-369103434b64" containerName="marketplace-operator" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979037 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc10041-704b-4b00-8e4e-369103434b64" containerName="marketplace-operator" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979044 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979050 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979058 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979063 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979071 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979076 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979083 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979088 4699 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979096 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979101 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979109 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979132 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979141 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a83978-4f86-404b-967a-0e7493ff6721" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979147 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a83978-4f86-404b-967a-0e7493ff6721" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979153 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a83978-4f86-404b-967a-0e7493ff6721" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979158 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a83978-4f86-404b-967a-0e7493ff6721" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979169 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979175 4699 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979183 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979188 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979197 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979202 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" containerName="extract-content" Feb 26 11:18:36 crc kubenswrapper[4699]: E0226 11:18:36.979210 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979215 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" containerName="extract-utilities" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979303 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a83978-4f86-404b-967a-0e7493ff6721" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979312 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d171ad-7d92-4c70-a686-65f60ded8a03" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979320 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac0026c3-1fad-4b34-9c42-389971f0c773" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979326 4699 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5cc10041-704b-4b00-8e4e-369103434b64" containerName="marketplace-operator" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979332 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b9da973-6b5f-4485-adca-8792b0a3d256" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979339 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c96a703-e568-4916-8035-a951ae91dc2b" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979348 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7ddf51-5522-4085-8567-76c9a254ed15" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.979355 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ea10063-7888-400e-af1c-216cbde5a13e" containerName="registry-server" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.980077 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.982472 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 11:18:36 crc kubenswrapper[4699]: I0226 11:18:36.996364 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r555d"] Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.113442 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6tpd\" (UniqueName: \"kubernetes.io/projected/d174508d-e5d5-4912-a652-e7b264f1c882-kube-api-access-l6tpd\") pod \"redhat-marketplace-r555d\" (UID: \"d174508d-e5d5-4912-a652-e7b264f1c882\") " pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.113525 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d174508d-e5d5-4912-a652-e7b264f1c882-catalog-content\") pod \"redhat-marketplace-r555d\" (UID: \"d174508d-e5d5-4912-a652-e7b264f1c882\") " pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.113592 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d174508d-e5d5-4912-a652-e7b264f1c882-utilities\") pod \"redhat-marketplace-r555d\" (UID: \"d174508d-e5d5-4912-a652-e7b264f1c882\") " pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.179232 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xv8lg"] Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.180747 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.184354 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.189737 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xv8lg"] Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.214440 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d174508d-e5d5-4912-a652-e7b264f1c882-utilities\") pod \"redhat-marketplace-r555d\" (UID: \"d174508d-e5d5-4912-a652-e7b264f1c882\") " pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.214492 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6tpd\" (UniqueName: \"kubernetes.io/projected/d174508d-e5d5-4912-a652-e7b264f1c882-kube-api-access-l6tpd\") pod \"redhat-marketplace-r555d\" (UID: \"d174508d-e5d5-4912-a652-e7b264f1c882\") " pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.214546 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d174508d-e5d5-4912-a652-e7b264f1c882-catalog-content\") pod \"redhat-marketplace-r555d\" (UID: \"d174508d-e5d5-4912-a652-e7b264f1c882\") " pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.214996 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d174508d-e5d5-4912-a652-e7b264f1c882-utilities\") pod \"redhat-marketplace-r555d\" (UID: \"d174508d-e5d5-4912-a652-e7b264f1c882\") " pod="openshift-marketplace/redhat-marketplace-r555d" Feb 
26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.215070 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d174508d-e5d5-4912-a652-e7b264f1c882-catalog-content\") pod \"redhat-marketplace-r555d\" (UID: \"d174508d-e5d5-4912-a652-e7b264f1c882\") " pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.233402 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6tpd\" (UniqueName: \"kubernetes.io/projected/d174508d-e5d5-4912-a652-e7b264f1c882-kube-api-access-l6tpd\") pod \"redhat-marketplace-r555d\" (UID: \"d174508d-e5d5-4912-a652-e7b264f1c882\") " pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.295260 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.315848 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-catalog-content\") pod \"redhat-operators-xv8lg\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.315939 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-utilities\") pod \"redhat-operators-xv8lg\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.315955 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwr8w\" (UniqueName: 
\"kubernetes.io/projected/e84a1dbc-431c-4897-b5fd-f04460b7f943-kube-api-access-jwr8w\") pod \"redhat-operators-xv8lg\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.496534 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwr8w\" (UniqueName: \"kubernetes.io/projected/e84a1dbc-431c-4897-b5fd-f04460b7f943-kube-api-access-jwr8w\") pod \"redhat-operators-xv8lg\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.496742 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-utilities\") pod \"redhat-operators-xv8lg\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.496782 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-catalog-content\") pod \"redhat-operators-xv8lg\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.497264 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-catalog-content\") pod \"redhat-operators-xv8lg\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.497335 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-utilities\") pod \"redhat-operators-xv8lg\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.520646 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwr8w\" (UniqueName: \"kubernetes.io/projected/e84a1dbc-431c-4897-b5fd-f04460b7f943-kube-api-access-jwr8w\") pod \"redhat-operators-xv8lg\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.606900 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r555d"] Feb 26 11:18:37 crc kubenswrapper[4699]: I0226 11:18:37.810312 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:38 crc kubenswrapper[4699]: I0226 11:18:38.265759 4699 generic.go:334] "Generic (PLEG): container finished" podID="d174508d-e5d5-4912-a652-e7b264f1c882" containerID="c2143e2c5cc81b1899d2d5bc7fcd2c6e1c715acd804535ca070e55a22efaf376" exitCode=0 Feb 26 11:18:38 crc kubenswrapper[4699]: I0226 11:18:38.271536 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r555d" event={"ID":"d174508d-e5d5-4912-a652-e7b264f1c882","Type":"ContainerDied","Data":"c2143e2c5cc81b1899d2d5bc7fcd2c6e1c715acd804535ca070e55a22efaf376"} Feb 26 11:18:38 crc kubenswrapper[4699]: I0226 11:18:38.271578 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r555d" event={"ID":"d174508d-e5d5-4912-a652-e7b264f1c882","Type":"ContainerStarted","Data":"38f122ab998345047edb5b05c9d24d0513627c08f110dfa18f65cff552b1d59f"} Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.137742 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-xv8lg"] Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.273158 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xv8lg" event={"ID":"e84a1dbc-431c-4897-b5fd-f04460b7f943","Type":"ContainerStarted","Data":"6d8c08def942c9655caee92b122902fc51271c1537ca60f7447fb09b383d1bcf"} Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.382025 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5vsj9"] Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.383996 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.388067 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.397108 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vsj9"] Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.496879 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23d2795-eec2-4e37-8902-7f9220e44cb1-catalog-content\") pod \"certified-operators-5vsj9\" (UID: \"a23d2795-eec2-4e37-8902-7f9220e44cb1\") " pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.497007 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23d2795-eec2-4e37-8902-7f9220e44cb1-utilities\") pod \"certified-operators-5vsj9\" (UID: \"a23d2795-eec2-4e37-8902-7f9220e44cb1\") " pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.497058 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlhn\" (UniqueName: \"kubernetes.io/projected/a23d2795-eec2-4e37-8902-7f9220e44cb1-kube-api-access-knlhn\") pod \"certified-operators-5vsj9\" (UID: \"a23d2795-eec2-4e37-8902-7f9220e44cb1\") " pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.581514 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hfvdf"] Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.583657 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.585698 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.592917 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfvdf"] Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.598063 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23d2795-eec2-4e37-8902-7f9220e44cb1-utilities\") pod \"certified-operators-5vsj9\" (UID: \"a23d2795-eec2-4e37-8902-7f9220e44cb1\") " pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.598123 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knlhn\" (UniqueName: \"kubernetes.io/projected/a23d2795-eec2-4e37-8902-7f9220e44cb1-kube-api-access-knlhn\") pod \"certified-operators-5vsj9\" (UID: \"a23d2795-eec2-4e37-8902-7f9220e44cb1\") " pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.598154 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23d2795-eec2-4e37-8902-7f9220e44cb1-catalog-content\") pod \"certified-operators-5vsj9\" (UID: \"a23d2795-eec2-4e37-8902-7f9220e44cb1\") " pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.598758 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23d2795-eec2-4e37-8902-7f9220e44cb1-utilities\") pod \"certified-operators-5vsj9\" (UID: \"a23d2795-eec2-4e37-8902-7f9220e44cb1\") " pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.598772 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23d2795-eec2-4e37-8902-7f9220e44cb1-catalog-content\") pod \"certified-operators-5vsj9\" (UID: \"a23d2795-eec2-4e37-8902-7f9220e44cb1\") " pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.620825 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knlhn\" (UniqueName: \"kubernetes.io/projected/a23d2795-eec2-4e37-8902-7f9220e44cb1-kube-api-access-knlhn\") pod \"certified-operators-5vsj9\" (UID: \"a23d2795-eec2-4e37-8902-7f9220e44cb1\") " pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.698939 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcflx\" (UniqueName: \"kubernetes.io/projected/fde9effb-9fa9-46a0-a8e6-08080ed0b8ba-kube-api-access-xcflx\") pod \"community-operators-hfvdf\" (UID: \"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba\") " pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.699254 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde9effb-9fa9-46a0-a8e6-08080ed0b8ba-utilities\") pod \"community-operators-hfvdf\" (UID: \"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba\") " pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.699274 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde9effb-9fa9-46a0-a8e6-08080ed0b8ba-catalog-content\") pod \"community-operators-hfvdf\" (UID: \"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba\") " pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.703347 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.800212 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcflx\" (UniqueName: \"kubernetes.io/projected/fde9effb-9fa9-46a0-a8e6-08080ed0b8ba-kube-api-access-xcflx\") pod \"community-operators-hfvdf\" (UID: \"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba\") " pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.800279 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde9effb-9fa9-46a0-a8e6-08080ed0b8ba-utilities\") pod \"community-operators-hfvdf\" (UID: \"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba\") " pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.800304 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde9effb-9fa9-46a0-a8e6-08080ed0b8ba-catalog-content\") pod 
\"community-operators-hfvdf\" (UID: \"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba\") " pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.800881 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde9effb-9fa9-46a0-a8e6-08080ed0b8ba-utilities\") pod \"community-operators-hfvdf\" (UID: \"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba\") " pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.801134 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde9effb-9fa9-46a0-a8e6-08080ed0b8ba-catalog-content\") pod \"community-operators-hfvdf\" (UID: \"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba\") " pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:39 crc kubenswrapper[4699]: I0226 11:18:39.839806 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcflx\" (UniqueName: \"kubernetes.io/projected/fde9effb-9fa9-46a0-a8e6-08080ed0b8ba-kube-api-access-xcflx\") pod \"community-operators-hfvdf\" (UID: \"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba\") " pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:40 crc kubenswrapper[4699]: I0226 11:18:40.064529 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:40 crc kubenswrapper[4699]: I0226 11:18:40.132345 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vsj9"] Feb 26 11:18:40 crc kubenswrapper[4699]: I0226 11:18:40.282359 4699 generic.go:334] "Generic (PLEG): container finished" podID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerID="388e074c3a65bb78e59f3313093bb2c909eaec385eec81e289a50a066aaab9b5" exitCode=0 Feb 26 11:18:40 crc kubenswrapper[4699]: I0226 11:18:40.282475 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xv8lg" event={"ID":"e84a1dbc-431c-4897-b5fd-f04460b7f943","Type":"ContainerDied","Data":"388e074c3a65bb78e59f3313093bb2c909eaec385eec81e289a50a066aaab9b5"} Feb 26 11:18:40 crc kubenswrapper[4699]: I0226 11:18:40.283979 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vsj9" event={"ID":"a23d2795-eec2-4e37-8902-7f9220e44cb1","Type":"ContainerStarted","Data":"bfe1b91bcedbbe9a51ded533b5d6175c6435fb9eeb1f1689657fa3d9850ba37f"} Feb 26 11:18:40 crc kubenswrapper[4699]: I0226 11:18:40.285910 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r555d" event={"ID":"d174508d-e5d5-4912-a652-e7b264f1c882","Type":"ContainerStarted","Data":"01b60bb8ecce1da4cf03c4ab1174ba5408fadff25b132bfc5331957adca04cdf"} Feb 26 11:18:40 crc kubenswrapper[4699]: I0226 11:18:40.315634 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfvdf"] Feb 26 11:18:40 crc kubenswrapper[4699]: W0226 11:18:40.385205 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfde9effb_9fa9_46a0_a8e6_08080ed0b8ba.slice/crio-e830db26342abc0c12262a99f12cecb9b41f22c38af368f13422f72dfaa8737b WatchSource:0}: Error finding container 
e830db26342abc0c12262a99f12cecb9b41f22c38af368f13422f72dfaa8737b: Status 404 returned error can't find the container with id e830db26342abc0c12262a99f12cecb9b41f22c38af368f13422f72dfaa8737b Feb 26 11:18:41 crc kubenswrapper[4699]: I0226 11:18:41.293517 4699 generic.go:334] "Generic (PLEG): container finished" podID="fde9effb-9fa9-46a0-a8e6-08080ed0b8ba" containerID="0b3c4cb225a0334e1da0494e85efeb36160c8802eabfdd450996c572adea01e2" exitCode=0 Feb 26 11:18:41 crc kubenswrapper[4699]: I0226 11:18:41.293848 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfvdf" event={"ID":"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba","Type":"ContainerDied","Data":"0b3c4cb225a0334e1da0494e85efeb36160c8802eabfdd450996c572adea01e2"} Feb 26 11:18:41 crc kubenswrapper[4699]: I0226 11:18:41.293878 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfvdf" event={"ID":"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba","Type":"ContainerStarted","Data":"e830db26342abc0c12262a99f12cecb9b41f22c38af368f13422f72dfaa8737b"} Feb 26 11:18:41 crc kubenswrapper[4699]: I0226 11:18:41.296952 4699 generic.go:334] "Generic (PLEG): container finished" podID="d174508d-e5d5-4912-a652-e7b264f1c882" containerID="01b60bb8ecce1da4cf03c4ab1174ba5408fadff25b132bfc5331957adca04cdf" exitCode=0 Feb 26 11:18:41 crc kubenswrapper[4699]: I0226 11:18:41.297022 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r555d" event={"ID":"d174508d-e5d5-4912-a652-e7b264f1c882","Type":"ContainerDied","Data":"01b60bb8ecce1da4cf03c4ab1174ba5408fadff25b132bfc5331957adca04cdf"} Feb 26 11:18:41 crc kubenswrapper[4699]: I0226 11:18:41.300141 4699 generic.go:334] "Generic (PLEG): container finished" podID="a23d2795-eec2-4e37-8902-7f9220e44cb1" containerID="49c756eff3fcf51adce9012c86148834634fc07c8832d17ca62ff816d2c86ae3" exitCode=0 Feb 26 11:18:41 crc kubenswrapper[4699]: I0226 11:18:41.300168 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vsj9" event={"ID":"a23d2795-eec2-4e37-8902-7f9220e44cb1","Type":"ContainerDied","Data":"49c756eff3fcf51adce9012c86148834634fc07c8832d17ca62ff816d2c86ae3"} Feb 26 11:18:41 crc kubenswrapper[4699]: I0226 11:18:41.586748 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:18:41 crc kubenswrapper[4699]: I0226 11:18:41.586826 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:18:42 crc kubenswrapper[4699]: I0226 11:18:42.307145 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xv8lg" event={"ID":"e84a1dbc-431c-4897-b5fd-f04460b7f943","Type":"ContainerStarted","Data":"b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4"} Feb 26 11:18:42 crc kubenswrapper[4699]: I0226 11:18:42.309615 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r555d" event={"ID":"d174508d-e5d5-4912-a652-e7b264f1c882","Type":"ContainerStarted","Data":"315e6d4c826d7007192db54c8b333215cec7a542897127c31fb01afbf5a995ac"} Feb 26 11:18:42 crc kubenswrapper[4699]: I0226 11:18:42.353333 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r555d" podStartSLOduration=2.777968489 podStartE2EDuration="6.353316901s" podCreationTimestamp="2026-02-26 11:18:36 +0000 UTC" firstStartedPulling="2026-02-26 
11:18:38.26742501 +0000 UTC m=+464.078251444" lastFinishedPulling="2026-02-26 11:18:41.842773422 +0000 UTC m=+467.653599856" observedRunningTime="2026-02-26 11:18:42.351128227 +0000 UTC m=+468.161954681" watchObservedRunningTime="2026-02-26 11:18:42.353316901 +0000 UTC m=+468.164143335" Feb 26 11:18:43 crc kubenswrapper[4699]: I0226 11:18:43.330783 4699 generic.go:334] "Generic (PLEG): container finished" podID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerID="b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4" exitCode=0 Feb 26 11:18:43 crc kubenswrapper[4699]: I0226 11:18:43.331745 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xv8lg" event={"ID":"e84a1dbc-431c-4897-b5fd-f04460b7f943","Type":"ContainerDied","Data":"b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4"} Feb 26 11:18:43 crc kubenswrapper[4699]: I0226 11:18:43.335138 4699 generic.go:334] "Generic (PLEG): container finished" podID="fde9effb-9fa9-46a0-a8e6-08080ed0b8ba" containerID="44cffda81fe937c8fdf483f307285a80ac13d127d8ff22a73b84d040fdf0e363" exitCode=0 Feb 26 11:18:43 crc kubenswrapper[4699]: I0226 11:18:43.335360 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfvdf" event={"ID":"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba","Type":"ContainerDied","Data":"44cffda81fe937c8fdf483f307285a80ac13d127d8ff22a73b84d040fdf0e363"} Feb 26 11:18:44 crc kubenswrapper[4699]: I0226 11:18:44.345594 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vsj9" event={"ID":"a23d2795-eec2-4e37-8902-7f9220e44cb1","Type":"ContainerStarted","Data":"a4a2d76947fef9ec12842bfe0c7c36789ec7162e796e8641057b89f3c7cf4b88"} Feb 26 11:18:45 crc kubenswrapper[4699]: I0226 11:18:45.352195 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xv8lg" 
event={"ID":"e84a1dbc-431c-4897-b5fd-f04460b7f943","Type":"ContainerStarted","Data":"e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9"} Feb 26 11:18:45 crc kubenswrapper[4699]: I0226 11:18:45.354030 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" podUID="7232eb23-31ae-4e72-ae27-c256dc4cac9a" containerName="registry" containerID="cri-o://4779da011a858c6a8df3a7fdfbfb2e01a004c252953c916a440f89808caa4efd" gracePeriod=30 Feb 26 11:18:45 crc kubenswrapper[4699]: I0226 11:18:45.354419 4699 generic.go:334] "Generic (PLEG): container finished" podID="a23d2795-eec2-4e37-8902-7f9220e44cb1" containerID="a4a2d76947fef9ec12842bfe0c7c36789ec7162e796e8641057b89f3c7cf4b88" exitCode=0 Feb 26 11:18:45 crc kubenswrapper[4699]: I0226 11:18:45.354472 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vsj9" event={"ID":"a23d2795-eec2-4e37-8902-7f9220e44cb1","Type":"ContainerDied","Data":"a4a2d76947fef9ec12842bfe0c7c36789ec7162e796e8641057b89f3c7cf4b88"} Feb 26 11:18:45 crc kubenswrapper[4699]: I0226 11:18:45.358040 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfvdf" event={"ID":"fde9effb-9fa9-46a0-a8e6-08080ed0b8ba","Type":"ContainerStarted","Data":"eaf06ce3d6f30adf2fd276e186ccb2d44080d7c105c0c98ee38e551bb6793706"} Feb 26 11:18:45 crc kubenswrapper[4699]: I0226 11:18:45.732427 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xv8lg" podStartSLOduration=4.693495705 podStartE2EDuration="8.732405289s" podCreationTimestamp="2026-02-26 11:18:37 +0000 UTC" firstStartedPulling="2026-02-26 11:18:40.285895009 +0000 UTC m=+466.096721443" lastFinishedPulling="2026-02-26 11:18:44.324804583 +0000 UTC m=+470.135631027" observedRunningTime="2026-02-26 11:18:45.374641369 +0000 UTC m=+471.185467813" 
watchObservedRunningTime="2026-02-26 11:18:45.732405289 +0000 UTC m=+471.543231743" Feb 26 11:18:45 crc kubenswrapper[4699]: I0226 11:18:45.785931 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hfvdf" podStartSLOduration=3.63857007 podStartE2EDuration="6.785910179s" podCreationTimestamp="2026-02-26 11:18:39 +0000 UTC" firstStartedPulling="2026-02-26 11:18:41.29556515 +0000 UTC m=+467.106391594" lastFinishedPulling="2026-02-26 11:18:44.442905279 +0000 UTC m=+470.253731703" observedRunningTime="2026-02-26 11:18:45.777562822 +0000 UTC m=+471.588389256" watchObservedRunningTime="2026-02-26 11:18:45.785910179 +0000 UTC m=+471.596736623" Feb 26 11:18:47 crc kubenswrapper[4699]: I0226 11:18:47.295827 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:47 crc kubenswrapper[4699]: I0226 11:18:47.296168 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:47 crc kubenswrapper[4699]: I0226 11:18:47.355439 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:47 crc kubenswrapper[4699]: I0226 11:18:47.467166 4699 generic.go:334] "Generic (PLEG): container finished" podID="7232eb23-31ae-4e72-ae27-c256dc4cac9a" containerID="4779da011a858c6a8df3a7fdfbfb2e01a004c252953c916a440f89808caa4efd" exitCode=0 Feb 26 11:18:47 crc kubenswrapper[4699]: I0226 11:18:47.467261 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" event={"ID":"7232eb23-31ae-4e72-ae27-c256dc4cac9a","Type":"ContainerDied","Data":"4779da011a858c6a8df3a7fdfbfb2e01a004c252953c916a440f89808caa4efd"} Feb 26 11:18:47 crc kubenswrapper[4699]: I0226 11:18:47.503590 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-marketplace-r555d" Feb 26 11:18:47 crc kubenswrapper[4699]: I0226 11:18:47.811614 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:47 crc kubenswrapper[4699]: I0226 11:18:47.811690 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.471439 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.475938 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" event={"ID":"7232eb23-31ae-4e72-ae27-c256dc4cac9a","Type":"ContainerDied","Data":"39ab42bfea1ba6c0800a2508ff52b7eb12199142899ef006de5ffbee4f2135a3"} Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.475992 4699 scope.go:117] "RemoveContainer" containerID="4779da011a858c6a8df3a7fdfbfb2e01a004c252953c916a440f89808caa4efd" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.476006 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t8656" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.577869 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.577915 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7232eb23-31ae-4e72-ae27-c256dc4cac9a-ca-trust-extracted\") pod \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.577945 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-tls\") pod \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.577977 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trrf7\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-kube-api-access-trrf7\") pod \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.577996 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-certificates\") pod \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.578033 4699 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7232eb23-31ae-4e72-ae27-c256dc4cac9a-installation-pull-secrets\") pod \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.578059 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-trusted-ca\") pod \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.578084 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-bound-sa-token\") pod \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\" (UID: \"7232eb23-31ae-4e72-ae27-c256dc4cac9a\") " Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.578866 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7232eb23-31ae-4e72-ae27-c256dc4cac9a" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.578903 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7232eb23-31ae-4e72-ae27-c256dc4cac9a" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.584142 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7232eb23-31ae-4e72-ae27-c256dc4cac9a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7232eb23-31ae-4e72-ae27-c256dc4cac9a" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.585508 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7232eb23-31ae-4e72-ae27-c256dc4cac9a" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.586794 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-kube-api-access-trrf7" (OuterVolumeSpecName: "kube-api-access-trrf7") pod "7232eb23-31ae-4e72-ae27-c256dc4cac9a" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a"). InnerVolumeSpecName "kube-api-access-trrf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.590519 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7232eb23-31ae-4e72-ae27-c256dc4cac9a" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.598803 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7232eb23-31ae-4e72-ae27-c256dc4cac9a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7232eb23-31ae-4e72-ae27-c256dc4cac9a" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.680177 4699 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.680210 4699 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.680221 4699 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7232eb23-31ae-4e72-ae27-c256dc4cac9a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.680229 4699 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.680237 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trrf7\" (UniqueName: \"kubernetes.io/projected/7232eb23-31ae-4e72-ae27-c256dc4cac9a-kube-api-access-trrf7\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.680247 4699 reconciler_common.go:293] "Volume detached for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7232eb23-31ae-4e72-ae27-c256dc4cac9a-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.680255 4699 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7232eb23-31ae-4e72-ae27-c256dc4cac9a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.773683 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7232eb23-31ae-4e72-ae27-c256dc4cac9a" (UID: "7232eb23-31ae-4e72-ae27-c256dc4cac9a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.805444 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8656"] Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.809272 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8656"] Feb 26 11:18:48 crc kubenswrapper[4699]: I0226 11:18:48.850566 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xv8lg" podUID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerName="registry-server" probeResult="failure" output=< Feb 26 11:18:48 crc kubenswrapper[4699]: timeout: failed to connect service ":50051" within 1s Feb 26 11:18:48 crc kubenswrapper[4699]: > Feb 26 11:18:50 crc kubenswrapper[4699]: I0226 11:18:50.065651 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:50 crc kubenswrapper[4699]: I0226 11:18:50.065964 4699 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:50 crc kubenswrapper[4699]: I0226 11:18:50.108025 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:50 crc kubenswrapper[4699]: I0226 11:18:50.268414 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7232eb23-31ae-4e72-ae27-c256dc4cac9a" path="/var/lib/kubelet/pods/7232eb23-31ae-4e72-ae27-c256dc4cac9a/volumes" Feb 26 11:18:50 crc kubenswrapper[4699]: I0226 11:18:50.530029 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hfvdf" Feb 26 11:18:54 crc kubenswrapper[4699]: I0226 11:18:54.517327 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vsj9" event={"ID":"a23d2795-eec2-4e37-8902-7f9220e44cb1","Type":"ContainerStarted","Data":"216a021af4af58458326e2f9e398377f0f7c5a03927b8be1378311556dacc286"} Feb 26 11:18:54 crc kubenswrapper[4699]: I0226 11:18:54.533073 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5vsj9" podStartSLOduration=3.348663783 podStartE2EDuration="15.533054513s" podCreationTimestamp="2026-02-26 11:18:39 +0000 UTC" firstStartedPulling="2026-02-26 11:18:41.301809934 +0000 UTC m=+467.112636358" lastFinishedPulling="2026-02-26 11:18:53.486200654 +0000 UTC m=+479.297027088" observedRunningTime="2026-02-26 11:18:54.53159225 +0000 UTC m=+480.342418684" watchObservedRunningTime="2026-02-26 11:18:54.533054513 +0000 UTC m=+480.343880947" Feb 26 11:18:57 crc kubenswrapper[4699]: I0226 11:18:57.851647 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:57 crc kubenswrapper[4699]: I0226 11:18:57.898143 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:18:59 crc kubenswrapper[4699]: I0226 11:18:59.704473 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:59 crc kubenswrapper[4699]: I0226 11:18:59.704856 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:18:59 crc kubenswrapper[4699]: I0226 11:18:59.751010 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:19:00 crc kubenswrapper[4699]: I0226 11:19:00.589918 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5vsj9" Feb 26 11:19:11 crc kubenswrapper[4699]: I0226 11:19:11.585204 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:19:11 crc kubenswrapper[4699]: I0226 11:19:11.585730 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:19:11 crc kubenswrapper[4699]: I0226 11:19:11.585779 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:19:11 crc kubenswrapper[4699]: I0226 11:19:11.586367 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"650d424704999ccaef77ddc678846c35c1a480092b312ddf8beddcd52de6fa7e"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:19:11 crc kubenswrapper[4699]: I0226 11:19:11.586430 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://650d424704999ccaef77ddc678846c35c1a480092b312ddf8beddcd52de6fa7e" gracePeriod=600 Feb 26 11:19:12 crc kubenswrapper[4699]: I0226 11:19:12.617565 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="650d424704999ccaef77ddc678846c35c1a480092b312ddf8beddcd52de6fa7e" exitCode=0 Feb 26 11:19:12 crc kubenswrapper[4699]: I0226 11:19:12.617664 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"650d424704999ccaef77ddc678846c35c1a480092b312ddf8beddcd52de6fa7e"} Feb 26 11:19:12 crc kubenswrapper[4699]: I0226 11:19:12.618144 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"b71edd76e1595d983e68eaa39c03589da9abd360ecf74eeb3e44306707c89512"} Feb 26 11:19:12 crc kubenswrapper[4699]: I0226 11:19:12.618164 4699 scope.go:117] "RemoveContainer" containerID="0bdaf031a20943a74ba41b521a83b5b10a6051c6df8a453d647cb318131a97b4" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.132970 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535080-dcs8z"] Feb 26 11:20:00 crc kubenswrapper[4699]: E0226 
11:20:00.133705 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7232eb23-31ae-4e72-ae27-c256dc4cac9a" containerName="registry" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.133717 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="7232eb23-31ae-4e72-ae27-c256dc4cac9a" containerName="registry" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.133808 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="7232eb23-31ae-4e72-ae27-c256dc4cac9a" containerName="registry" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.134189 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535080-dcs8z" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.136589 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.137623 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.137916 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.141626 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535080-dcs8z"] Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.304059 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6k6l\" (UniqueName: \"kubernetes.io/projected/c9ea4516-0708-4b4a-9dd5-75e6220a55d4-kube-api-access-p6k6l\") pod \"auto-csr-approver-29535080-dcs8z\" (UID: \"c9ea4516-0708-4b4a-9dd5-75e6220a55d4\") " pod="openshift-infra/auto-csr-approver-29535080-dcs8z" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.405365 4699 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-p6k6l\" (UniqueName: \"kubernetes.io/projected/c9ea4516-0708-4b4a-9dd5-75e6220a55d4-kube-api-access-p6k6l\") pod \"auto-csr-approver-29535080-dcs8z\" (UID: \"c9ea4516-0708-4b4a-9dd5-75e6220a55d4\") " pod="openshift-infra/auto-csr-approver-29535080-dcs8z" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.428660 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6k6l\" (UniqueName: \"kubernetes.io/projected/c9ea4516-0708-4b4a-9dd5-75e6220a55d4-kube-api-access-p6k6l\") pod \"auto-csr-approver-29535080-dcs8z\" (UID: \"c9ea4516-0708-4b4a-9dd5-75e6220a55d4\") " pod="openshift-infra/auto-csr-approver-29535080-dcs8z" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.460500 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535080-dcs8z" Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.880407 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535080-dcs8z"] Feb 26 11:20:00 crc kubenswrapper[4699]: I0226 11:20:00.892522 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:20:01 crc kubenswrapper[4699]: I0226 11:20:01.887129 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535080-dcs8z" event={"ID":"c9ea4516-0708-4b4a-9dd5-75e6220a55d4","Type":"ContainerStarted","Data":"9290cf98df79f5c6f276fe0487d74b1e9d60f8b47efda86fef48a4429d494e23"} Feb 26 11:20:03 crc kubenswrapper[4699]: I0226 11:20:03.899715 4699 generic.go:334] "Generic (PLEG): container finished" podID="c9ea4516-0708-4b4a-9dd5-75e6220a55d4" containerID="ec18e4fa3c26a9a3b620eb9c167811e69c8b0db26c298c317aa409e857f17f0c" exitCode=0 Feb 26 11:20:03 crc kubenswrapper[4699]: I0226 11:20:03.899776 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535080-dcs8z" 
event={"ID":"c9ea4516-0708-4b4a-9dd5-75e6220a55d4","Type":"ContainerDied","Data":"ec18e4fa3c26a9a3b620eb9c167811e69c8b0db26c298c317aa409e857f17f0c"} Feb 26 11:20:05 crc kubenswrapper[4699]: I0226 11:20:05.152850 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535080-dcs8z" Feb 26 11:20:05 crc kubenswrapper[4699]: I0226 11:20:05.266316 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6k6l\" (UniqueName: \"kubernetes.io/projected/c9ea4516-0708-4b4a-9dd5-75e6220a55d4-kube-api-access-p6k6l\") pod \"c9ea4516-0708-4b4a-9dd5-75e6220a55d4\" (UID: \"c9ea4516-0708-4b4a-9dd5-75e6220a55d4\") " Feb 26 11:20:05 crc kubenswrapper[4699]: I0226 11:20:05.273415 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ea4516-0708-4b4a-9dd5-75e6220a55d4-kube-api-access-p6k6l" (OuterVolumeSpecName: "kube-api-access-p6k6l") pod "c9ea4516-0708-4b4a-9dd5-75e6220a55d4" (UID: "c9ea4516-0708-4b4a-9dd5-75e6220a55d4"). InnerVolumeSpecName "kube-api-access-p6k6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:20:05 crc kubenswrapper[4699]: I0226 11:20:05.367906 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6k6l\" (UniqueName: \"kubernetes.io/projected/c9ea4516-0708-4b4a-9dd5-75e6220a55d4-kube-api-access-p6k6l\") on node \"crc\" DevicePath \"\"" Feb 26 11:20:05 crc kubenswrapper[4699]: I0226 11:20:05.914490 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535080-dcs8z" event={"ID":"c9ea4516-0708-4b4a-9dd5-75e6220a55d4","Type":"ContainerDied","Data":"9290cf98df79f5c6f276fe0487d74b1e9d60f8b47efda86fef48a4429d494e23"} Feb 26 11:20:05 crc kubenswrapper[4699]: I0226 11:20:05.914540 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9290cf98df79f5c6f276fe0487d74b1e9d60f8b47efda86fef48a4429d494e23" Feb 26 11:20:05 crc kubenswrapper[4699]: I0226 11:20:05.914619 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535080-dcs8z" Feb 26 11:20:06 crc kubenswrapper[4699]: I0226 11:20:06.205082 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535074-bjfld"] Feb 26 11:20:06 crc kubenswrapper[4699]: I0226 11:20:06.211519 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535074-bjfld"] Feb 26 11:20:06 crc kubenswrapper[4699]: I0226 11:20:06.268026 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d444da-9127-459c-97c6-cdcff5b20e67" path="/var/lib/kubelet/pods/30d444da-9127-459c-97c6-cdcff5b20e67/volumes" Feb 26 11:20:09 crc kubenswrapper[4699]: I0226 11:20:09.630917 4699 patch_prober.go:28] interesting pod/oauth-openshift-f54c45747-bbg8s container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:20:09 crc kubenswrapper[4699]: I0226 11:20:09.633574 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" podUID="c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:20:09 crc kubenswrapper[4699]: I0226 11:20:09.635845 4699 patch_prober.go:28] interesting pod/oauth-openshift-f54c45747-bbg8s container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:20:09 crc kubenswrapper[4699]: I0226 11:20:09.635912 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-f54c45747-bbg8s" podUID="c1d0f1cb-3dbe-4e38-ad2d-579f02a34f3a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:20:20 crc kubenswrapper[4699]: I0226 11:20:20.763864 4699 scope.go:117] "RemoveContainer" containerID="52ffe1a540a589fb575f8cfc11cab09c8b7aa57c3ace31541c3b66e087bf8460" Feb 26 11:21:11 crc kubenswrapper[4699]: I0226 11:21:11.584640 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:21:11 crc kubenswrapper[4699]: I0226 11:21:11.585264 4699 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:21:41 crc kubenswrapper[4699]: I0226 11:21:41.585073 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:21:41 crc kubenswrapper[4699]: I0226 11:21:41.585658 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.134082 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535082-2l88q"] Feb 26 11:22:00 crc kubenswrapper[4699]: E0226 11:22:00.134835 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ea4516-0708-4b4a-9dd5-75e6220a55d4" containerName="oc" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.134849 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ea4516-0708-4b4a-9dd5-75e6220a55d4" containerName="oc" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.135007 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ea4516-0708-4b4a-9dd5-75e6220a55d4" containerName="oc" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.135558 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535082-2l88q" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.137276 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.137313 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.137340 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.144466 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535082-2l88q"] Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.259520 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc7k4\" (UniqueName: \"kubernetes.io/projected/b96109ee-edc2-496a-b6bc-cffad5fb9a40-kube-api-access-sc7k4\") pod \"auto-csr-approver-29535082-2l88q\" (UID: \"b96109ee-edc2-496a-b6bc-cffad5fb9a40\") " pod="openshift-infra/auto-csr-approver-29535082-2l88q" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.360519 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc7k4\" (UniqueName: \"kubernetes.io/projected/b96109ee-edc2-496a-b6bc-cffad5fb9a40-kube-api-access-sc7k4\") pod \"auto-csr-approver-29535082-2l88q\" (UID: \"b96109ee-edc2-496a-b6bc-cffad5fb9a40\") " pod="openshift-infra/auto-csr-approver-29535082-2l88q" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.380237 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc7k4\" (UniqueName: \"kubernetes.io/projected/b96109ee-edc2-496a-b6bc-cffad5fb9a40-kube-api-access-sc7k4\") pod \"auto-csr-approver-29535082-2l88q\" (UID: \"b96109ee-edc2-496a-b6bc-cffad5fb9a40\") " 
pod="openshift-infra/auto-csr-approver-29535082-2l88q" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.455168 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535082-2l88q" Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.696988 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535082-2l88q"] Feb 26 11:22:00 crc kubenswrapper[4699]: I0226 11:22:00.966108 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535082-2l88q" event={"ID":"b96109ee-edc2-496a-b6bc-cffad5fb9a40","Type":"ContainerStarted","Data":"af90e1811d7b4178074ee9001855fddd8d93df324958be18cb5138e0bbeaca19"} Feb 26 11:22:01 crc kubenswrapper[4699]: I0226 11:22:01.974234 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535082-2l88q" event={"ID":"b96109ee-edc2-496a-b6bc-cffad5fb9a40","Type":"ContainerStarted","Data":"13f8b1b98d014497027ee7037eac5f0ce1bbfdb9879bcfae0154cb4a61717ad1"} Feb 26 11:22:02 crc kubenswrapper[4699]: I0226 11:22:02.983125 4699 generic.go:334] "Generic (PLEG): container finished" podID="b96109ee-edc2-496a-b6bc-cffad5fb9a40" containerID="13f8b1b98d014497027ee7037eac5f0ce1bbfdb9879bcfae0154cb4a61717ad1" exitCode=0 Feb 26 11:22:02 crc kubenswrapper[4699]: I0226 11:22:02.983179 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535082-2l88q" event={"ID":"b96109ee-edc2-496a-b6bc-cffad5fb9a40","Type":"ContainerDied","Data":"13f8b1b98d014497027ee7037eac5f0ce1bbfdb9879bcfae0154cb4a61717ad1"} Feb 26 11:22:04 crc kubenswrapper[4699]: I0226 11:22:04.169169 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535082-2l88q" Feb 26 11:22:04 crc kubenswrapper[4699]: I0226 11:22:04.310059 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc7k4\" (UniqueName: \"kubernetes.io/projected/b96109ee-edc2-496a-b6bc-cffad5fb9a40-kube-api-access-sc7k4\") pod \"b96109ee-edc2-496a-b6bc-cffad5fb9a40\" (UID: \"b96109ee-edc2-496a-b6bc-cffad5fb9a40\") " Feb 26 11:22:04 crc kubenswrapper[4699]: I0226 11:22:04.315054 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96109ee-edc2-496a-b6bc-cffad5fb9a40-kube-api-access-sc7k4" (OuterVolumeSpecName: "kube-api-access-sc7k4") pod "b96109ee-edc2-496a-b6bc-cffad5fb9a40" (UID: "b96109ee-edc2-496a-b6bc-cffad5fb9a40"). InnerVolumeSpecName "kube-api-access-sc7k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:22:04 crc kubenswrapper[4699]: I0226 11:22:04.412011 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc7k4\" (UniqueName: \"kubernetes.io/projected/b96109ee-edc2-496a-b6bc-cffad5fb9a40-kube-api-access-sc7k4\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:04 crc kubenswrapper[4699]: I0226 11:22:04.995782 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535082-2l88q" event={"ID":"b96109ee-edc2-496a-b6bc-cffad5fb9a40","Type":"ContainerDied","Data":"af90e1811d7b4178074ee9001855fddd8d93df324958be18cb5138e0bbeaca19"} Feb 26 11:22:04 crc kubenswrapper[4699]: I0226 11:22:04.995824 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535082-2l88q" Feb 26 11:22:04 crc kubenswrapper[4699]: I0226 11:22:04.995827 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af90e1811d7b4178074ee9001855fddd8d93df324958be18cb5138e0bbeaca19" Feb 26 11:22:05 crc kubenswrapper[4699]: I0226 11:22:05.031193 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535076-rv9x5"] Feb 26 11:22:05 crc kubenswrapper[4699]: I0226 11:22:05.034790 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535076-rv9x5"] Feb 26 11:22:06 crc kubenswrapper[4699]: I0226 11:22:06.268199 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9d78c8-4193-47a8-9ed9-208f6dc25831" path="/var/lib/kubelet/pods/0d9d78c8-4193-47a8-9ed9-208f6dc25831/volumes" Feb 26 11:22:11 crc kubenswrapper[4699]: I0226 11:22:11.584849 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:22:11 crc kubenswrapper[4699]: I0226 11:22:11.585231 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:22:11 crc kubenswrapper[4699]: I0226 11:22:11.585276 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:22:11 crc kubenswrapper[4699]: I0226 11:22:11.585891 4699 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b71edd76e1595d983e68eaa39c03589da9abd360ecf74eeb3e44306707c89512"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:22:11 crc kubenswrapper[4699]: I0226 11:22:11.585947 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://b71edd76e1595d983e68eaa39c03589da9abd360ecf74eeb3e44306707c89512" gracePeriod=600 Feb 26 11:22:12 crc kubenswrapper[4699]: I0226 11:22:12.056890 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="b71edd76e1595d983e68eaa39c03589da9abd360ecf74eeb3e44306707c89512" exitCode=0 Feb 26 11:22:12 crc kubenswrapper[4699]: I0226 11:22:12.056945 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"b71edd76e1595d983e68eaa39c03589da9abd360ecf74eeb3e44306707c89512"} Feb 26 11:22:12 crc kubenswrapper[4699]: I0226 11:22:12.057014 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"bb4262ffa74d3c4cd8ca9d3a4ee81267fb75459ec0f5e9e96d1dd3934b8627ca"} Feb 26 11:22:12 crc kubenswrapper[4699]: I0226 11:22:12.057038 4699 scope.go:117] "RemoveContainer" containerID="650d424704999ccaef77ddc678846c35c1a480092b312ddf8beddcd52de6fa7e" Feb 26 11:22:20 crc kubenswrapper[4699]: I0226 11:22:20.810217 4699 scope.go:117] "RemoveContainer" containerID="576debb0d3d58f5281816cda92fedce6f78492ddc1301cf006959585594f82b9" Feb 26 
11:22:20 crc kubenswrapper[4699]: I0226 11:22:20.832868 4699 scope.go:117] "RemoveContainer" containerID="19a60f72e3a64feb9f04d813b42f9a20a08e1ed258c497a9b61b68ad603f4b5b" Feb 26 11:22:20 crc kubenswrapper[4699]: I0226 11:22:20.868013 4699 scope.go:117] "RemoveContainer" containerID="b8eedef066fa8aaa6df130360e7ae91b6c35c65386cf0e1eb331ae24b87e6305" Feb 26 11:22:20 crc kubenswrapper[4699]: I0226 11:22:20.880712 4699 scope.go:117] "RemoveContainer" containerID="000757444f955626a5cade194e8afdfce85b9f484def8b4bc1703641245c47c3" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.146875 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535084-h8xlt"] Feb 26 11:24:00 crc kubenswrapper[4699]: E0226 11:24:00.147703 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96109ee-edc2-496a-b6bc-cffad5fb9a40" containerName="oc" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.147719 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96109ee-edc2-496a-b6bc-cffad5fb9a40" containerName="oc" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.147835 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96109ee-edc2-496a-b6bc-cffad5fb9a40" containerName="oc" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.148330 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535084-h8xlt" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.150731 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.151033 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.151459 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.155999 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535084-h8xlt"] Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.181229 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97pv\" (UniqueName: \"kubernetes.io/projected/98d6d072-33c5-4660-b6c3-80344c215e6a-kube-api-access-f97pv\") pod \"auto-csr-approver-29535084-h8xlt\" (UID: \"98d6d072-33c5-4660-b6c3-80344c215e6a\") " pod="openshift-infra/auto-csr-approver-29535084-h8xlt" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.282419 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f97pv\" (UniqueName: \"kubernetes.io/projected/98d6d072-33c5-4660-b6c3-80344c215e6a-kube-api-access-f97pv\") pod \"auto-csr-approver-29535084-h8xlt\" (UID: \"98d6d072-33c5-4660-b6c3-80344c215e6a\") " pod="openshift-infra/auto-csr-approver-29535084-h8xlt" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.304562 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f97pv\" (UniqueName: \"kubernetes.io/projected/98d6d072-33c5-4660-b6c3-80344c215e6a-kube-api-access-f97pv\") pod \"auto-csr-approver-29535084-h8xlt\" (UID: \"98d6d072-33c5-4660-b6c3-80344c215e6a\") " 
pod="openshift-infra/auto-csr-approver-29535084-h8xlt" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.466903 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535084-h8xlt" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.672779 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535084-h8xlt"] Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.737275 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dswxp"] Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.739771 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dswxp" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.745800 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.746066 4699 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vg6rm" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.746256 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.753973 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dswxp"] Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.759537 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-fhn2n"] Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.760240 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fhn2n" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.853591 4699 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-9d424" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.853685 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzvc8\" (UniqueName: \"kubernetes.io/projected/f026799a-39c7-443e-9801-f046ba8ae94b-kube-api-access-rzvc8\") pod \"cert-manager-cainjector-cf98fcc89-dswxp\" (UID: \"f026799a-39c7-443e-9801-f046ba8ae94b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dswxp" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.853750 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k4gj\" (UniqueName: \"kubernetes.io/projected/fc42522b-c5f4-4df2-8435-3e3985dd960c-kube-api-access-2k4gj\") pod \"cert-manager-858654f9db-fhn2n\" (UID: \"fc42522b-c5f4-4df2-8435-3e3985dd960c\") " pod="cert-manager/cert-manager-858654f9db-fhn2n" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.886160 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l2fdt"] Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.887247 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-l2fdt" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.890062 4699 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2qr8t" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.895775 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fhn2n"] Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.899448 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l2fdt"] Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.910047 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535084-h8xlt" event={"ID":"98d6d072-33c5-4660-b6c3-80344c215e6a","Type":"ContainerStarted","Data":"b06895e9f19dc046adcb983ba655ea56046b63cab9016b1a0bc760d4d3b03db8"} Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.954869 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k4gj\" (UniqueName: \"kubernetes.io/projected/fc42522b-c5f4-4df2-8435-3e3985dd960c-kube-api-access-2k4gj\") pod \"cert-manager-858654f9db-fhn2n\" (UID: \"fc42522b-c5f4-4df2-8435-3e3985dd960c\") " pod="cert-manager/cert-manager-858654f9db-fhn2n" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.954972 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzvc8\" (UniqueName: \"kubernetes.io/projected/f026799a-39c7-443e-9801-f046ba8ae94b-kube-api-access-rzvc8\") pod \"cert-manager-cainjector-cf98fcc89-dswxp\" (UID: \"f026799a-39c7-443e-9801-f046ba8ae94b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dswxp" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.972244 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzvc8\" (UniqueName: 
\"kubernetes.io/projected/f026799a-39c7-443e-9801-f046ba8ae94b-kube-api-access-rzvc8\") pod \"cert-manager-cainjector-cf98fcc89-dswxp\" (UID: \"f026799a-39c7-443e-9801-f046ba8ae94b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dswxp" Feb 26 11:24:00 crc kubenswrapper[4699]: I0226 11:24:00.973128 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k4gj\" (UniqueName: \"kubernetes.io/projected/fc42522b-c5f4-4df2-8435-3e3985dd960c-kube-api-access-2k4gj\") pod \"cert-manager-858654f9db-fhn2n\" (UID: \"fc42522b-c5f4-4df2-8435-3e3985dd960c\") " pod="cert-manager/cert-manager-858654f9db-fhn2n" Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 11:24:01.055779 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9c48\" (UniqueName: \"kubernetes.io/projected/fad1f923-b22c-4c0d-9eb9-684636bc76c0-kube-api-access-x9c48\") pod \"cert-manager-webhook-687f57d79b-l2fdt\" (UID: \"fad1f923-b22c-4c0d-9eb9-684636bc76c0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l2fdt" Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 11:24:01.157781 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9c48\" (UniqueName: \"kubernetes.io/projected/fad1f923-b22c-4c0d-9eb9-684636bc76c0-kube-api-access-x9c48\") pod \"cert-manager-webhook-687f57d79b-l2fdt\" (UID: \"fad1f923-b22c-4c0d-9eb9-684636bc76c0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l2fdt" Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 11:24:01.164180 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dswxp" Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 11:24:01.174983 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9c48\" (UniqueName: \"kubernetes.io/projected/fad1f923-b22c-4c0d-9eb9-684636bc76c0-kube-api-access-x9c48\") pod \"cert-manager-webhook-687f57d79b-l2fdt\" (UID: \"fad1f923-b22c-4c0d-9eb9-684636bc76c0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l2fdt" Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 11:24:01.186390 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fhn2n" Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 11:24:01.213217 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-l2fdt" Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 11:24:01.396520 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dswxp"] Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 11:24:01.442190 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fhn2n"] Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 11:24:01.507376 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l2fdt"] Feb 26 11:24:01 crc kubenswrapper[4699]: W0226 11:24:01.512510 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfad1f923_b22c_4c0d_9eb9_684636bc76c0.slice/crio-61b5767bf2cc0e2aeb7053618a452eb32aa80ba4f35dc6bbadfca4095e4ef427 WatchSource:0}: Error finding container 61b5767bf2cc0e2aeb7053618a452eb32aa80ba4f35dc6bbadfca4095e4ef427: Status 404 returned error can't find the container with id 61b5767bf2cc0e2aeb7053618a452eb32aa80ba4f35dc6bbadfca4095e4ef427 Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 
11:24:01.918337 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-l2fdt" event={"ID":"fad1f923-b22c-4c0d-9eb9-684636bc76c0","Type":"ContainerStarted","Data":"61b5767bf2cc0e2aeb7053618a452eb32aa80ba4f35dc6bbadfca4095e4ef427"} Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 11:24:01.919245 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fhn2n" event={"ID":"fc42522b-c5f4-4df2-8435-3e3985dd960c","Type":"ContainerStarted","Data":"0770a7f4946f9eacb522a566fd126b99a9173b2f6a09dffbab49aae7b55670e8"} Feb 26 11:24:01 crc kubenswrapper[4699]: I0226 11:24:01.920085 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dswxp" event={"ID":"f026799a-39c7-443e-9801-f046ba8ae94b","Type":"ContainerStarted","Data":"6d69bca4bcae0b85c4b685d53076413e159390406d3114e3cba2c6dd09d6006d"} Feb 26 11:24:04 crc kubenswrapper[4699]: I0226 11:24:04.936249 4699 generic.go:334] "Generic (PLEG): container finished" podID="98d6d072-33c5-4660-b6c3-80344c215e6a" containerID="dd76d54940753753e3f7a2683a8c241e99cd1928bc9d5ed547595d83c46f6f57" exitCode=0 Feb 26 11:24:04 crc kubenswrapper[4699]: I0226 11:24:04.936359 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535084-h8xlt" event={"ID":"98d6d072-33c5-4660-b6c3-80344c215e6a","Type":"ContainerDied","Data":"dd76d54940753753e3f7a2683a8c241e99cd1928bc9d5ed547595d83c46f6f57"} Feb 26 11:24:06 crc kubenswrapper[4699]: I0226 11:24:06.167804 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535084-h8xlt" Feb 26 11:24:06 crc kubenswrapper[4699]: I0226 11:24:06.324182 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f97pv\" (UniqueName: \"kubernetes.io/projected/98d6d072-33c5-4660-b6c3-80344c215e6a-kube-api-access-f97pv\") pod \"98d6d072-33c5-4660-b6c3-80344c215e6a\" (UID: \"98d6d072-33c5-4660-b6c3-80344c215e6a\") " Feb 26 11:24:06 crc kubenswrapper[4699]: I0226 11:24:06.330281 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d6d072-33c5-4660-b6c3-80344c215e6a-kube-api-access-f97pv" (OuterVolumeSpecName: "kube-api-access-f97pv") pod "98d6d072-33c5-4660-b6c3-80344c215e6a" (UID: "98d6d072-33c5-4660-b6c3-80344c215e6a"). InnerVolumeSpecName "kube-api-access-f97pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:24:06 crc kubenswrapper[4699]: I0226 11:24:06.425157 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f97pv\" (UniqueName: \"kubernetes.io/projected/98d6d072-33c5-4660-b6c3-80344c215e6a-kube-api-access-f97pv\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:06 crc kubenswrapper[4699]: I0226 11:24:06.949823 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535084-h8xlt" event={"ID":"98d6d072-33c5-4660-b6c3-80344c215e6a","Type":"ContainerDied","Data":"b06895e9f19dc046adcb983ba655ea56046b63cab9016b1a0bc760d4d3b03db8"} Feb 26 11:24:06 crc kubenswrapper[4699]: I0226 11:24:06.950472 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b06895e9f19dc046adcb983ba655ea56046b63cab9016b1a0bc760d4d3b03db8" Feb 26 11:24:06 crc kubenswrapper[4699]: I0226 11:24:06.949873 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535084-h8xlt" Feb 26 11:24:07 crc kubenswrapper[4699]: I0226 11:24:07.220743 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535078-ktbp9"] Feb 26 11:24:07 crc kubenswrapper[4699]: I0226 11:24:07.223555 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535078-ktbp9"] Feb 26 11:24:08 crc kubenswrapper[4699]: I0226 11:24:08.269540 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c181d85-a2e5-4771-a5a7-6cdd1f944012" path="/var/lib/kubelet/pods/4c181d85-a2e5-4771-a5a7-6cdd1f944012/volumes" Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.492937 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cw6vx"] Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.493909 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovn-controller" containerID="cri-o://74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f" gracePeriod=30 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.503003 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c" gracePeriod=30 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.503183 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="nbdb" containerID="cri-o://5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f" gracePeriod=30 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 
11:24:10.503237 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="northd" containerID="cri-o://e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f" gracePeriod=30 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.504240 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovn-acl-logging" containerID="cri-o://bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f" gracePeriod=30 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.504313 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="kube-rbac-proxy-node" containerID="cri-o://d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38" gracePeriod=30 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.504356 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="sbdb" containerID="cri-o://8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f" gracePeriod=30 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.548557 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" containerID="cri-o://674b1ddc9ce52057921afe22948e78b0ac743b734851b7422144e06a6bedf770" gracePeriod=30 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.973299 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovnkube-controller/2.log" Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.976105 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovn-acl-logging/0.log" Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.976653 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovn-controller/0.log" Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.977147 4699 generic.go:334] "Generic (PLEG): container finished" podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerID="674b1ddc9ce52057921afe22948e78b0ac743b734851b7422144e06a6bedf770" exitCode=0 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.977258 4699 generic.go:334] "Generic (PLEG): container finished" podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerID="8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f" exitCode=0 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.977341 4699 generic.go:334] "Generic (PLEG): container finished" podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerID="5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f" exitCode=0 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.977429 4699 generic.go:334] "Generic (PLEG): container finished" podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerID="e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f" exitCode=0 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.977516 4699 generic.go:334] "Generic (PLEG): container finished" podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerID="fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c" exitCode=0 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.977608 4699 generic.go:334] "Generic (PLEG): container finished" 
podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerID="d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38" exitCode=0 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.977744 4699 generic.go:334] "Generic (PLEG): container finished" podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerID="bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f" exitCode=143 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.977813 4699 generic.go:334] "Generic (PLEG): container finished" podID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerID="74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f" exitCode=143 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.977781 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"674b1ddc9ce52057921afe22948e78b0ac743b734851b7422144e06a6bedf770"} Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.977944 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f"} Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.978001 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f"} Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.978039 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f"} Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.978050 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c"} Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.978061 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38"} Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.978070 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f"} Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.978079 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f"} Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.978096 4699 scope.go:117] "RemoveContainer" containerID="063e6f60e4a1aae569cb37aa15eefe27977691a15f63d85e697b606d5a1771d0" Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.979817 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2k6b7_32ce77d1-5287-4674-aeda-810070efbb29/kube-multus/1.log" Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.980380 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2k6b7_32ce77d1-5287-4674-aeda-810070efbb29/kube-multus/0.log" Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.980588 4699 generic.go:334] "Generic (PLEG): container finished" podID="32ce77d1-5287-4674-aeda-810070efbb29" 
containerID="143a97abf6e80c5d27a74181526e16c9b98e3306181c3568beb75b7c14de4b31" exitCode=2 Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.980671 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2k6b7" event={"ID":"32ce77d1-5287-4674-aeda-810070efbb29","Type":"ContainerDied","Data":"143a97abf6e80c5d27a74181526e16c9b98e3306181c3568beb75b7c14de4b31"} Feb 26 11:24:10 crc kubenswrapper[4699]: I0226 11:24:10.981244 4699 scope.go:117] "RemoveContainer" containerID="143a97abf6e80c5d27a74181526e16c9b98e3306181c3568beb75b7c14de4b31" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.585567 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.585915 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.685700 4699 scope.go:117] "RemoveContainer" containerID="b60c607682186fb1f56b927c4ef394f4d6f9e9d14be5c7d13b1b52d84541aee3" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.815767 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovn-acl-logging/0.log" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.816270 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovn-controller/0.log" Feb 26 11:24:11 crc kubenswrapper[4699]: 
I0226 11:24:11.816781 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.878630 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4v2nm"] Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.878937 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="kubecfg-setup" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.878964 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="kubecfg-setup" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.878983 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.878995 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.879009 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879021 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.879034 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879044 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 
11:24:11.879062 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="sbdb" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879073 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="sbdb" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.879091 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="northd" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879102 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="northd" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.879138 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovn-acl-logging" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879149 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovn-acl-logging" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.879163 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="nbdb" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879174 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="nbdb" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.879194 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="kube-rbac-proxy-node" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879204 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="kube-rbac-proxy-node" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.879218 4699 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879228 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.879243 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d6d072-33c5-4660-b6c3-80344c215e6a" containerName="oc" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879254 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d6d072-33c5-4660-b6c3-80344c215e6a" containerName="oc" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.879267 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovn-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879277 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovn-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879421 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879438 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879450 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="nbdb" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879464 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="kube-rbac-proxy-node" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879481 4699 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovn-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879497 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879509 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d6d072-33c5-4660-b6c3-80344c215e6a" containerName="oc" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879522 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovn-acl-logging" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879540 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="northd" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879557 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879572 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="sbdb" Feb 26 11:24:11 crc kubenswrapper[4699]: E0226 11:24:11.879735 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879749 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.879884 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" containerName="ovnkube-controller" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.884167 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.895811 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-node-log\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.895892 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-node-log" (OuterVolumeSpecName: "node-log") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896042 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-run-ovn-kubernetes\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896067 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-node-log\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896095 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e225412-9202-42a4-8244-7a8a6355fcaf-ovnkube-config\") pod \"ovnkube-node-4v2nm\" (UID: 
\"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896137 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-kubelet\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896164 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e225412-9202-42a4-8244-7a8a6355fcaf-env-overrides\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896195 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-slash\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896224 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-var-lib-openvswitch\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896244 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e225412-9202-42a4-8244-7a8a6355fcaf-ovn-node-metrics-cert\") pod 
\"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896266 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-run-openvswitch\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896292 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-cni-netd\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896387 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg59h\" (UniqueName: \"kubernetes.io/projected/6e225412-9202-42a4-8244-7a8a6355fcaf-kube-api-access-hg59h\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896532 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-systemd-units\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896680 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-run-netns\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896747 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-cni-bin\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896789 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-etc-openvswitch\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896825 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-run-systemd\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896867 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-log-socket\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896882 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/6e225412-9202-42a4-8244-7a8a6355fcaf-ovnkube-script-lib\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896923 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.896953 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-run-ovn\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.897029 4699 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-node-log\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.989248 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovn-acl-logging/0.log" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.990191 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cw6vx_cd12b2df-7af6-45bc-88e7-d5e5e6451e65/ovn-controller/0.log" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.990555 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" 
event={"ID":"cd12b2df-7af6-45bc-88e7-d5e5e6451e65","Type":"ContainerDied","Data":"90b46f5a3e61ec03394a2be7ff4739209b903f31912a7a66807fca0693899985"} Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.990603 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cw6vx" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.990641 4699 scope.go:117] "RemoveContainer" containerID="674b1ddc9ce52057921afe22948e78b0ac743b734851b7422144e06a6bedf770" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.992591 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2k6b7_32ce77d1-5287-4674-aeda-810070efbb29/kube-multus/1.log" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.992662 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2k6b7" event={"ID":"32ce77d1-5287-4674-aeda-810070efbb29","Type":"ContainerStarted","Data":"c0b2606aa6761275edf27264b0d44368ad12c528dde0ab91e2f612830847c483"} Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997274 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-kubelet\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997331 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-config\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997353 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovn-node-metrics-cert\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997366 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-openvswitch\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997383 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnmg2\" (UniqueName: \"kubernetes.io/projected/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-kube-api-access-tnmg2\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997402 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-ovn-kubernetes\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997416 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-systemd-units\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997433 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-log-socket\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc 
kubenswrapper[4699]: I0226 11:24:11.997455 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997462 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997558 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997541 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997475 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-env-overrides\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997608 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997640 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-log-socket" (OuterVolumeSpecName: "log-socket") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997641 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-slash\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997666 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-slash" (OuterVolumeSpecName: "host-slash") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). 
InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997691 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997699 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-systemd\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997744 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-script-lib\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997767 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-ovn\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997794 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-var-lib-openvswitch\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: 
\"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997836 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-netd\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997864 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-netns\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997897 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-etc-openvswitch\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997918 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-bin\") pod \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\" (UID: \"cd12b2df-7af6-45bc-88e7-d5e5e6451e65\") " Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.997983 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998009 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998040 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998024 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998064 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998075 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998085 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998091 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-cni-bin\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998128 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998163 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-etc-openvswitch\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998171 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-cni-bin\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998196 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-run-systemd\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998230 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-etc-openvswitch\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998274 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-log-socket\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 
11:24:11.998298 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e225412-9202-42a4-8244-7a8a6355fcaf-ovnkube-script-lib\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998353 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998361 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-run-systemd\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998378 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-run-ovn\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998386 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-log-socket\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998400 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-run-ovn-kubernetes\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998420 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-node-log\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998426 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998435 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998449 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-run-ovn-kubernetes\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998461 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-node-log\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998494 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-run-ovn\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998519 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e225412-9202-42a4-8244-7a8a6355fcaf-ovnkube-config\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998553 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-kubelet\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 
11:24:11.998616 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e225412-9202-42a4-8244-7a8a6355fcaf-env-overrides\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998648 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-slash\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998701 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-var-lib-openvswitch\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998726 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e225412-9202-42a4-8244-7a8a6355fcaf-ovn-node-metrics-cert\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998652 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-kubelet\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998757 4699 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-run-openvswitch\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998790 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-cni-netd\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998817 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg59h\" (UniqueName: \"kubernetes.io/projected/6e225412-9202-42a4-8244-7a8a6355fcaf-kube-api-access-hg59h\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998844 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-systemd-units\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998873 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-run-netns\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998945 4699 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998961 4699 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998972 4699 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998982 4699 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.998993 4699 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999005 4699 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999015 4699 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999027 4699 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-log-socket\") on node \"crc\" DevicePath \"\"" Feb 26 
11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999039 4699 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999050 4699 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999060 4699 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-slash\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999072 4699 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999082 4699 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999093 4699 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999104 4699 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999138 4699 
reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999173 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-run-netns\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999187 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e225412-9202-42a4-8244-7a8a6355fcaf-ovnkube-script-lib\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999215 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-cni-netd\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999253 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-var-lib-openvswitch\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999346 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e225412-9202-42a4-8244-7a8a6355fcaf-ovnkube-config\") pod \"ovnkube-node-4v2nm\" 
(UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999389 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-host-slash\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999493 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-systemd-units\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:11 crc kubenswrapper[4699]: I0226 11:24:11.999529 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e225412-9202-42a4-8244-7a8a6355fcaf-run-openvswitch\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:11.999895 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e225412-9202-42a4-8244-7a8a6355fcaf-env-overrides\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.003668 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.003720 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e225412-9202-42a4-8244-7a8a6355fcaf-ovn-node-metrics-cert\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.004060 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-kube-api-access-tnmg2" (OuterVolumeSpecName: "kube-api-access-tnmg2") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "kube-api-access-tnmg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.014004 4699 scope.go:117] "RemoveContainer" containerID="8eeb219af62ecea95e9f4c720e0a14e4e1e1040fdd269f558f7f41fae5ee962f" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.016879 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg59h\" (UniqueName: \"kubernetes.io/projected/6e225412-9202-42a4-8244-7a8a6355fcaf-kube-api-access-hg59h\") pod \"ovnkube-node-4v2nm\" (UID: \"6e225412-9202-42a4-8244-7a8a6355fcaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.017495 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cd12b2df-7af6-45bc-88e7-d5e5e6451e65" (UID: "cd12b2df-7af6-45bc-88e7-d5e5e6451e65"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.027972 4699 scope.go:117] "RemoveContainer" containerID="5e753f4df3718432af139b5f02ab082262c2c8291cde6e4c137e9ff0ccc5d99f" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.041724 4699 scope.go:117] "RemoveContainer" containerID="e01dd23c87a8712fd245585cd433a7a1866c51b962b1fda3443d9a7769b4c39f" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.053916 4699 scope.go:117] "RemoveContainer" containerID="fca4b97859dbd31955adbd54e5f1367a778a5c3cf0090aadf13b78bd1a22730c" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.067538 4699 scope.go:117] "RemoveContainer" containerID="d2a8ccfd692d3ac3cdee1225316983c92d15802aa700defa7e85a3addf9e1f38" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.080544 4699 scope.go:117] "RemoveContainer" containerID="bdb9d3ff849ad635bd6190a474ab109a5f6bcefcc7a0f2f16b94487089d13e6f" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.094295 4699 scope.go:117] "RemoveContainer" containerID="74540ecf8d835bb553c16798af12c9682a5bd4609120308ba6fc6a0b9aa1f75f" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.100618 4699 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.100647 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnmg2\" (UniqueName: \"kubernetes.io/projected/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-kube-api-access-tnmg2\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.100659 4699 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd12b2df-7af6-45bc-88e7-d5e5e6451e65-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 
11:24:12.110819 4699 scope.go:117] "RemoveContainer" containerID="f0570cc3dc8f007e88024df69cf743715a318df16fb782800259eccc698cb124" Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.201641 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:12 crc kubenswrapper[4699]: W0226 11:24:12.219232 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e225412_9202_42a4_8244_7a8a6355fcaf.slice/crio-ead38c998cc2a72439d65b79bc2d7dae518f48af2797c07c59c43124b08d6bf3 WatchSource:0}: Error finding container ead38c998cc2a72439d65b79bc2d7dae518f48af2797c07c59c43124b08d6bf3: Status 404 returned error can't find the container with id ead38c998cc2a72439d65b79bc2d7dae518f48af2797c07c59c43124b08d6bf3 Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.342738 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cw6vx"] Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.347056 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cw6vx"] Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.997714 4699 generic.go:334] "Generic (PLEG): container finished" podID="6e225412-9202-42a4-8244-7a8a6355fcaf" containerID="b5583008b20615d2641e157ba520e6ef11eaf1ee067d0e0367d39e9785af9683" exitCode=0 Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.997783 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" event={"ID":"6e225412-9202-42a4-8244-7a8a6355fcaf","Type":"ContainerDied","Data":"b5583008b20615d2641e157ba520e6ef11eaf1ee067d0e0367d39e9785af9683"} Feb 26 11:24:12 crc kubenswrapper[4699]: I0226 11:24:12.997809 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" 
event={"ID":"6e225412-9202-42a4-8244-7a8a6355fcaf","Type":"ContainerStarted","Data":"ead38c998cc2a72439d65b79bc2d7dae518f48af2797c07c59c43124b08d6bf3"} Feb 26 11:24:14 crc kubenswrapper[4699]: I0226 11:24:14.268093 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd12b2df-7af6-45bc-88e7-d5e5e6451e65" path="/var/lib/kubelet/pods/cd12b2df-7af6-45bc-88e7-d5e5e6451e65/volumes" Feb 26 11:24:15 crc kubenswrapper[4699]: I0226 11:24:15.016974 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" event={"ID":"6e225412-9202-42a4-8244-7a8a6355fcaf","Type":"ContainerStarted","Data":"e5228953a471c5bf62e601425e4a4e1e7a9bca3e8a2fb987d42d53ac18cbf41b"} Feb 26 11:24:15 crc kubenswrapper[4699]: I0226 11:24:15.017020 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" event={"ID":"6e225412-9202-42a4-8244-7a8a6355fcaf","Type":"ContainerStarted","Data":"4d44ebcb4b5934d350c6342c1ef712a9e604aa7b1b7e888c21af059eb4020c57"} Feb 26 11:24:15 crc kubenswrapper[4699]: I0226 11:24:15.017035 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" event={"ID":"6e225412-9202-42a4-8244-7a8a6355fcaf","Type":"ContainerStarted","Data":"b53670f2a1d77ad666ea86f6f35f3bf9efd756f4b62bcad6358d429572d280b1"} Feb 26 11:24:15 crc kubenswrapper[4699]: I0226 11:24:15.017048 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" event={"ID":"6e225412-9202-42a4-8244-7a8a6355fcaf","Type":"ContainerStarted","Data":"8736e52a22701dfef846f9f69791fa56f45c49d6832f030bdd8c30aaeef58f44"} Feb 26 11:24:15 crc kubenswrapper[4699]: I0226 11:24:15.017059 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" 
event={"ID":"6e225412-9202-42a4-8244-7a8a6355fcaf","Type":"ContainerStarted","Data":"b39d19c3f4b39eca0df8cfaa18890e0ac5f460252a98b91a74533005ff326381"} Feb 26 11:24:15 crc kubenswrapper[4699]: I0226 11:24:15.816188 4699 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 11:24:16 crc kubenswrapper[4699]: I0226 11:24:16.026035 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" event={"ID":"6e225412-9202-42a4-8244-7a8a6355fcaf","Type":"ContainerStarted","Data":"66408311a440f996a585906d6b0bb296c0978e4365f541be1cd36119bf734d2e"} Feb 26 11:24:18 crc kubenswrapper[4699]: I0226 11:24:18.040544 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" event={"ID":"6e225412-9202-42a4-8244-7a8a6355fcaf","Type":"ContainerStarted","Data":"44ea8bcccfbbd1e0439ea7b2c30686c35af18b0f333d640e283ef0d40ee573c1"} Feb 26 11:24:18 crc kubenswrapper[4699]: I0226 11:24:18.043441 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-l2fdt" event={"ID":"fad1f923-b22c-4c0d-9eb9-684636bc76c0","Type":"ContainerStarted","Data":"601a1f9bc395916dc73e3649a9402bb2463b9927ed19700fe8b4858c159ddc6a"} Feb 26 11:24:18 crc kubenswrapper[4699]: I0226 11:24:18.043538 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-l2fdt" Feb 26 11:24:18 crc kubenswrapper[4699]: I0226 11:24:18.044993 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fhn2n" event={"ID":"fc42522b-c5f4-4df2-8435-3e3985dd960c","Type":"ContainerStarted","Data":"e3515b76794064e529f97d98d4b61fc037a56271092bcbd9a727eeab2b391225"} Feb 26 11:24:18 crc kubenswrapper[4699]: I0226 11:24:18.046832 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dswxp" 
event={"ID":"f026799a-39c7-443e-9801-f046ba8ae94b","Type":"ContainerStarted","Data":"59d440b27262243190a759fd9d6f8c7c9f604fcdff40437906a0f3452c0c3b79"} Feb 26 11:24:18 crc kubenswrapper[4699]: I0226 11:24:18.062297 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-l2fdt" podStartSLOduration=2.513520577 podStartE2EDuration="18.062278471s" podCreationTimestamp="2026-02-26 11:24:00 +0000 UTC" firstStartedPulling="2026-02-26 11:24:01.515616206 +0000 UTC m=+787.326442640" lastFinishedPulling="2026-02-26 11:24:17.06437411 +0000 UTC m=+802.875200534" observedRunningTime="2026-02-26 11:24:18.059911702 +0000 UTC m=+803.870738136" watchObservedRunningTime="2026-02-26 11:24:18.062278471 +0000 UTC m=+803.873104925" Feb 26 11:24:18 crc kubenswrapper[4699]: I0226 11:24:18.076734 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-fhn2n" podStartSLOduration=2.48237991 podStartE2EDuration="18.076719532s" podCreationTimestamp="2026-02-26 11:24:00 +0000 UTC" firstStartedPulling="2026-02-26 11:24:01.463441886 +0000 UTC m=+787.274268320" lastFinishedPulling="2026-02-26 11:24:17.057781508 +0000 UTC m=+802.868607942" observedRunningTime="2026-02-26 11:24:18.074205049 +0000 UTC m=+803.885031493" watchObservedRunningTime="2026-02-26 11:24:18.076719532 +0000 UTC m=+803.887545966" Feb 26 11:24:20 crc kubenswrapper[4699]: I0226 11:24:20.060822 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" event={"ID":"6e225412-9202-42a4-8244-7a8a6355fcaf","Type":"ContainerStarted","Data":"ae018b0b119c23ef3fe1b58334318bb0b0101e5d55a9aef36c1acb828b434379"} Feb 26 11:24:20 crc kubenswrapper[4699]: I0226 11:24:20.061227 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:20 crc kubenswrapper[4699]: I0226 11:24:20.061246 4699 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:20 crc kubenswrapper[4699]: I0226 11:24:20.061258 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:20 crc kubenswrapper[4699]: I0226 11:24:20.086037 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:20 crc kubenswrapper[4699]: I0226 11:24:20.088367 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dswxp" podStartSLOduration=4.465270038 podStartE2EDuration="20.088354167s" podCreationTimestamp="2026-02-26 11:24:00 +0000 UTC" firstStartedPulling="2026-02-26 11:24:01.419830826 +0000 UTC m=+787.230657250" lastFinishedPulling="2026-02-26 11:24:17.042914955 +0000 UTC m=+802.853741379" observedRunningTime="2026-02-26 11:24:18.089739181 +0000 UTC m=+803.900565615" watchObservedRunningTime="2026-02-26 11:24:20.088354167 +0000 UTC m=+805.899180601" Feb 26 11:24:20 crc kubenswrapper[4699]: I0226 11:24:20.089498 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" podStartSLOduration=9.0894927 podStartE2EDuration="9.0894927s" podCreationTimestamp="2026-02-26 11:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:24:20.087274545 +0000 UTC m=+805.898100989" watchObservedRunningTime="2026-02-26 11:24:20.0894927 +0000 UTC m=+805.900319134" Feb 26 11:24:20 crc kubenswrapper[4699]: I0226 11:24:20.091575 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:24:20 crc kubenswrapper[4699]: I0226 11:24:20.939048 4699 scope.go:117] "RemoveContainer" 
containerID="1eda56a25e25c14621838f63ba6ea80e65461406feb4a8836fe9fda800de7616" Feb 26 11:24:26 crc kubenswrapper[4699]: I0226 11:24:26.215331 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-l2fdt" Feb 26 11:24:41 crc kubenswrapper[4699]: I0226 11:24:41.584768 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:24:41 crc kubenswrapper[4699]: I0226 11:24:41.585359 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:24:42 crc kubenswrapper[4699]: I0226 11:24:42.225431 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4v2nm" Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.268415 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5"] Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.270021 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.273158 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.278653 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5"] Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.456415 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5\" (UID: \"a0751c34-68ec-4fd1-821f-94e314dd5621\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.456501 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5\" (UID: \"a0751c34-68ec-4fd1-821f-94e314dd5621\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.456547 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hcm2\" (UniqueName: \"kubernetes.io/projected/a0751c34-68ec-4fd1-821f-94e314dd5621-kube-api-access-7hcm2\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5\" (UID: \"a0751c34-68ec-4fd1-821f-94e314dd5621\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:08 crc kubenswrapper[4699]: 
I0226 11:25:08.558108 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hcm2\" (UniqueName: \"kubernetes.io/projected/a0751c34-68ec-4fd1-821f-94e314dd5621-kube-api-access-7hcm2\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5\" (UID: \"a0751c34-68ec-4fd1-821f-94e314dd5621\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.558224 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5\" (UID: \"a0751c34-68ec-4fd1-821f-94e314dd5621\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.558268 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5\" (UID: \"a0751c34-68ec-4fd1-821f-94e314dd5621\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.558971 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5\" (UID: \"a0751c34-68ec-4fd1-821f-94e314dd5621\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.559341 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5\" (UID: \"a0751c34-68ec-4fd1-821f-94e314dd5621\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.578285 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hcm2\" (UniqueName: \"kubernetes.io/projected/a0751c34-68ec-4fd1-821f-94e314dd5621-kube-api-access-7hcm2\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5\" (UID: \"a0751c34-68ec-4fd1-821f-94e314dd5621\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:08 crc kubenswrapper[4699]: I0226 11:25:08.594002 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:09 crc kubenswrapper[4699]: I0226 11:25:09.084933 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5"] Feb 26 11:25:09 crc kubenswrapper[4699]: I0226 11:25:09.324722 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" event={"ID":"a0751c34-68ec-4fd1-821f-94e314dd5621","Type":"ContainerStarted","Data":"b9b833dc0ad614a27fa356e39b9a35f75196680f8a3d5999e5f99c8756fcb337"} Feb 26 11:25:09 crc kubenswrapper[4699]: I0226 11:25:09.324766 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" event={"ID":"a0751c34-68ec-4fd1-821f-94e314dd5621","Type":"ContainerStarted","Data":"8e60f093fadd5b8f255fd685567fcc1049fd5ee2845fd36a5cbe4b8a83c5a17f"} Feb 26 11:25:09 crc kubenswrapper[4699]: I0226 11:25:09.811410 4699 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wqmqz"] Feb 26 11:25:09 crc kubenswrapper[4699]: I0226 11:25:09.812690 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:09 crc kubenswrapper[4699]: I0226 11:25:09.822891 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wqmqz"] Feb 26 11:25:09 crc kubenswrapper[4699]: I0226 11:25:09.976974 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a69df934-4fa7-472d-abe7-8fa4ec5d4296-utilities\") pod \"redhat-operators-wqmqz\" (UID: \"a69df934-4fa7-472d-abe7-8fa4ec5d4296\") " pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:09 crc kubenswrapper[4699]: I0226 11:25:09.977616 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqzrw\" (UniqueName: \"kubernetes.io/projected/a69df934-4fa7-472d-abe7-8fa4ec5d4296-kube-api-access-bqzrw\") pod \"redhat-operators-wqmqz\" (UID: \"a69df934-4fa7-472d-abe7-8fa4ec5d4296\") " pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:09 crc kubenswrapper[4699]: I0226 11:25:09.977682 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a69df934-4fa7-472d-abe7-8fa4ec5d4296-catalog-content\") pod \"redhat-operators-wqmqz\" (UID: \"a69df934-4fa7-472d-abe7-8fa4ec5d4296\") " pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:10 crc kubenswrapper[4699]: I0226 11:25:10.079063 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a69df934-4fa7-472d-abe7-8fa4ec5d4296-utilities\") pod \"redhat-operators-wqmqz\" (UID: 
\"a69df934-4fa7-472d-abe7-8fa4ec5d4296\") " pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:10 crc kubenswrapper[4699]: I0226 11:25:10.079174 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqzrw\" (UniqueName: \"kubernetes.io/projected/a69df934-4fa7-472d-abe7-8fa4ec5d4296-kube-api-access-bqzrw\") pod \"redhat-operators-wqmqz\" (UID: \"a69df934-4fa7-472d-abe7-8fa4ec5d4296\") " pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:10 crc kubenswrapper[4699]: I0226 11:25:10.079221 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a69df934-4fa7-472d-abe7-8fa4ec5d4296-catalog-content\") pod \"redhat-operators-wqmqz\" (UID: \"a69df934-4fa7-472d-abe7-8fa4ec5d4296\") " pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:10 crc kubenswrapper[4699]: I0226 11:25:10.080027 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a69df934-4fa7-472d-abe7-8fa4ec5d4296-utilities\") pod \"redhat-operators-wqmqz\" (UID: \"a69df934-4fa7-472d-abe7-8fa4ec5d4296\") " pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:10 crc kubenswrapper[4699]: I0226 11:25:10.080080 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a69df934-4fa7-472d-abe7-8fa4ec5d4296-catalog-content\") pod \"redhat-operators-wqmqz\" (UID: \"a69df934-4fa7-472d-abe7-8fa4ec5d4296\") " pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:10 crc kubenswrapper[4699]: I0226 11:25:10.108728 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqzrw\" (UniqueName: \"kubernetes.io/projected/a69df934-4fa7-472d-abe7-8fa4ec5d4296-kube-api-access-bqzrw\") pod \"redhat-operators-wqmqz\" (UID: \"a69df934-4fa7-472d-abe7-8fa4ec5d4296\") " 
pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:10 crc kubenswrapper[4699]: I0226 11:25:10.131357 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:10 crc kubenswrapper[4699]: I0226 11:25:10.335313 4699 generic.go:334] "Generic (PLEG): container finished" podID="a0751c34-68ec-4fd1-821f-94e314dd5621" containerID="b9b833dc0ad614a27fa356e39b9a35f75196680f8a3d5999e5f99c8756fcb337" exitCode=0 Feb 26 11:25:10 crc kubenswrapper[4699]: I0226 11:25:10.335365 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" event={"ID":"a0751c34-68ec-4fd1-821f-94e314dd5621","Type":"ContainerDied","Data":"b9b833dc0ad614a27fa356e39b9a35f75196680f8a3d5999e5f99c8756fcb337"} Feb 26 11:25:10 crc kubenswrapper[4699]: I0226 11:25:10.337013 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:25:10 crc kubenswrapper[4699]: I0226 11:25:10.468185 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wqmqz"] Feb 26 11:25:10 crc kubenswrapper[4699]: W0226 11:25:10.476169 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda69df934_4fa7_472d_abe7_8fa4ec5d4296.slice/crio-6f28cea8ca9217cfb71b838533ff5198ca00a3639bdd1298a79995ee2e3cd5ff WatchSource:0}: Error finding container 6f28cea8ca9217cfb71b838533ff5198ca00a3639bdd1298a79995ee2e3cd5ff: Status 404 returned error can't find the container with id 6f28cea8ca9217cfb71b838533ff5198ca00a3639bdd1298a79995ee2e3cd5ff Feb 26 11:25:11 crc kubenswrapper[4699]: I0226 11:25:11.341538 4699 generic.go:334] "Generic (PLEG): container finished" podID="a69df934-4fa7-472d-abe7-8fa4ec5d4296" containerID="97d5eea8e56120b9332d89f62e107ab0fa56bbf98f0f8c93efb08ea22a900d84" exitCode=0 Feb 26 
11:25:11 crc kubenswrapper[4699]: I0226 11:25:11.341584 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqmqz" event={"ID":"a69df934-4fa7-472d-abe7-8fa4ec5d4296","Type":"ContainerDied","Data":"97d5eea8e56120b9332d89f62e107ab0fa56bbf98f0f8c93efb08ea22a900d84"} Feb 26 11:25:11 crc kubenswrapper[4699]: I0226 11:25:11.341610 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqmqz" event={"ID":"a69df934-4fa7-472d-abe7-8fa4ec5d4296","Type":"ContainerStarted","Data":"6f28cea8ca9217cfb71b838533ff5198ca00a3639bdd1298a79995ee2e3cd5ff"} Feb 26 11:25:11 crc kubenswrapper[4699]: I0226 11:25:11.585445 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:25:11 crc kubenswrapper[4699]: I0226 11:25:11.585729 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:25:11 crc kubenswrapper[4699]: I0226 11:25:11.585832 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:25:11 crc kubenswrapper[4699]: I0226 11:25:11.586521 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb4262ffa74d3c4cd8ca9d3a4ee81267fb75459ec0f5e9e96d1dd3934b8627ca"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness 
probe, will be restarted" Feb 26 11:25:11 crc kubenswrapper[4699]: I0226 11:25:11.586674 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://bb4262ffa74d3c4cd8ca9d3a4ee81267fb75459ec0f5e9e96d1dd3934b8627ca" gracePeriod=600 Feb 26 11:25:12 crc kubenswrapper[4699]: I0226 11:25:12.349529 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="bb4262ffa74d3c4cd8ca9d3a4ee81267fb75459ec0f5e9e96d1dd3934b8627ca" exitCode=0 Feb 26 11:25:12 crc kubenswrapper[4699]: I0226 11:25:12.349579 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"bb4262ffa74d3c4cd8ca9d3a4ee81267fb75459ec0f5e9e96d1dd3934b8627ca"} Feb 26 11:25:12 crc kubenswrapper[4699]: I0226 11:25:12.350145 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"119837a96f7eb017f5f7e56268e9cf0e4a17276f8f8dd21ae8a57f4864ea4cf7"} Feb 26 11:25:12 crc kubenswrapper[4699]: I0226 11:25:12.350172 4699 scope.go:117] "RemoveContainer" containerID="b71edd76e1595d983e68eaa39c03589da9abd360ecf74eeb3e44306707c89512" Feb 26 11:25:13 crc kubenswrapper[4699]: I0226 11:25:13.360700 4699 generic.go:334] "Generic (PLEG): container finished" podID="a0751c34-68ec-4fd1-821f-94e314dd5621" containerID="0133aa6fda56a47cc137b971e9f9e5b35387818b4f9389af34a3ab9ed0a72a2e" exitCode=0 Feb 26 11:25:13 crc kubenswrapper[4699]: I0226 11:25:13.360791 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" 
event={"ID":"a0751c34-68ec-4fd1-821f-94e314dd5621","Type":"ContainerDied","Data":"0133aa6fda56a47cc137b971e9f9e5b35387818b4f9389af34a3ab9ed0a72a2e"} Feb 26 11:25:14 crc kubenswrapper[4699]: I0226 11:25:14.374191 4699 generic.go:334] "Generic (PLEG): container finished" podID="a0751c34-68ec-4fd1-821f-94e314dd5621" containerID="1081701f4f8e2b852fa913e23a40fca64b36b6412291cd67cb93addd8c21658d" exitCode=0 Feb 26 11:25:14 crc kubenswrapper[4699]: I0226 11:25:14.374266 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" event={"ID":"a0751c34-68ec-4fd1-821f-94e314dd5621","Type":"ContainerDied","Data":"1081701f4f8e2b852fa913e23a40fca64b36b6412291cd67cb93addd8c21658d"} Feb 26 11:25:15 crc kubenswrapper[4699]: I0226 11:25:15.734711 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:15 crc kubenswrapper[4699]: I0226 11:25:15.865282 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-util\") pod \"a0751c34-68ec-4fd1-821f-94e314dd5621\" (UID: \"a0751c34-68ec-4fd1-821f-94e314dd5621\") " Feb 26 11:25:15 crc kubenswrapper[4699]: I0226 11:25:15.865343 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hcm2\" (UniqueName: \"kubernetes.io/projected/a0751c34-68ec-4fd1-821f-94e314dd5621-kube-api-access-7hcm2\") pod \"a0751c34-68ec-4fd1-821f-94e314dd5621\" (UID: \"a0751c34-68ec-4fd1-821f-94e314dd5621\") " Feb 26 11:25:15 crc kubenswrapper[4699]: I0226 11:25:15.865378 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-bundle\") pod \"a0751c34-68ec-4fd1-821f-94e314dd5621\" (UID: 
\"a0751c34-68ec-4fd1-821f-94e314dd5621\") " Feb 26 11:25:15 crc kubenswrapper[4699]: I0226 11:25:15.866485 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-bundle" (OuterVolumeSpecName: "bundle") pod "a0751c34-68ec-4fd1-821f-94e314dd5621" (UID: "a0751c34-68ec-4fd1-821f-94e314dd5621"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:25:15 crc kubenswrapper[4699]: I0226 11:25:15.875858 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0751c34-68ec-4fd1-821f-94e314dd5621-kube-api-access-7hcm2" (OuterVolumeSpecName: "kube-api-access-7hcm2") pod "a0751c34-68ec-4fd1-821f-94e314dd5621" (UID: "a0751c34-68ec-4fd1-821f-94e314dd5621"). InnerVolumeSpecName "kube-api-access-7hcm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:25:15 crc kubenswrapper[4699]: I0226 11:25:15.876611 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-util" (OuterVolumeSpecName: "util") pod "a0751c34-68ec-4fd1-821f-94e314dd5621" (UID: "a0751c34-68ec-4fd1-821f-94e314dd5621"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:25:15 crc kubenswrapper[4699]: I0226 11:25:15.966516 4699 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:15 crc kubenswrapper[4699]: I0226 11:25:15.966553 4699 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0751c34-68ec-4fd1-821f-94e314dd5621-util\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:15 crc kubenswrapper[4699]: I0226 11:25:15.966562 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hcm2\" (UniqueName: \"kubernetes.io/projected/a0751c34-68ec-4fd1-821f-94e314dd5621-kube-api-access-7hcm2\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:16 crc kubenswrapper[4699]: I0226 11:25:16.386361 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" event={"ID":"a0751c34-68ec-4fd1-821f-94e314dd5621","Type":"ContainerDied","Data":"8e60f093fadd5b8f255fd685567fcc1049fd5ee2845fd36a5cbe4b8a83c5a17f"} Feb 26 11:25:16 crc kubenswrapper[4699]: I0226 11:25:16.386401 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e60f093fadd5b8f255fd685567fcc1049fd5ee2845fd36a5cbe4b8a83c5a17f" Feb 26 11:25:16 crc kubenswrapper[4699]: I0226 11:25:16.386459 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.735564 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-8l8n8"] Feb 26 11:25:17 crc kubenswrapper[4699]: E0226 11:25:17.736073 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0751c34-68ec-4fd1-821f-94e314dd5621" containerName="extract" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.736092 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0751c34-68ec-4fd1-821f-94e314dd5621" containerName="extract" Feb 26 11:25:17 crc kubenswrapper[4699]: E0226 11:25:17.736135 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0751c34-68ec-4fd1-821f-94e314dd5621" containerName="util" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.736145 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0751c34-68ec-4fd1-821f-94e314dd5621" containerName="util" Feb 26 11:25:17 crc kubenswrapper[4699]: E0226 11:25:17.736159 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0751c34-68ec-4fd1-821f-94e314dd5621" containerName="pull" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.736176 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0751c34-68ec-4fd1-821f-94e314dd5621" containerName="pull" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.736286 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0751c34-68ec-4fd1-821f-94e314dd5621" containerName="extract" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.736638 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8l8n8" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.740900 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.741274 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9krck" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.741435 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.745804 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-8l8n8"] Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.801272 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ff56\" (UniqueName: \"kubernetes.io/projected/15312afe-49aa-4681-8513-6ed9c774d222-kube-api-access-8ff56\") pod \"nmstate-operator-75c5dccd6c-8l8n8\" (UID: \"15312afe-49aa-4681-8513-6ed9c774d222\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8l8n8" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.902744 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ff56\" (UniqueName: \"kubernetes.io/projected/15312afe-49aa-4681-8513-6ed9c774d222-kube-api-access-8ff56\") pod \"nmstate-operator-75c5dccd6c-8l8n8\" (UID: \"15312afe-49aa-4681-8513-6ed9c774d222\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8l8n8" Feb 26 11:25:17 crc kubenswrapper[4699]: I0226 11:25:17.920843 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ff56\" (UniqueName: \"kubernetes.io/projected/15312afe-49aa-4681-8513-6ed9c774d222-kube-api-access-8ff56\") pod \"nmstate-operator-75c5dccd6c-8l8n8\" (UID: 
\"15312afe-49aa-4681-8513-6ed9c774d222\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8l8n8" Feb 26 11:25:18 crc kubenswrapper[4699]: I0226 11:25:18.057330 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8l8n8" Feb 26 11:25:22 crc kubenswrapper[4699]: I0226 11:25:22.575399 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-8l8n8"] Feb 26 11:25:23 crc kubenswrapper[4699]: I0226 11:25:23.454428 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8l8n8" event={"ID":"15312afe-49aa-4681-8513-6ed9c774d222","Type":"ContainerStarted","Data":"437400b19e39ad13f8a5fba459599183e02859395f12ff0d707908885ae8c8bd"} Feb 26 11:25:23 crc kubenswrapper[4699]: I0226 11:25:23.457091 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqmqz" event={"ID":"a69df934-4fa7-472d-abe7-8fa4ec5d4296","Type":"ContainerStarted","Data":"82bb3a908b13a6664db9f5fd16f23580aaf3e16ecceb5cb2a3e2885f89be6580"} Feb 26 11:25:27 crc kubenswrapper[4699]: I0226 11:25:27.685503 4699 generic.go:334] "Generic (PLEG): container finished" podID="a69df934-4fa7-472d-abe7-8fa4ec5d4296" containerID="82bb3a908b13a6664db9f5fd16f23580aaf3e16ecceb5cb2a3e2885f89be6580" exitCode=0 Feb 26 11:25:27 crc kubenswrapper[4699]: I0226 11:25:27.685611 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqmqz" event={"ID":"a69df934-4fa7-472d-abe7-8fa4ec5d4296","Type":"ContainerDied","Data":"82bb3a908b13a6664db9f5fd16f23580aaf3e16ecceb5cb2a3e2885f89be6580"} Feb 26 11:25:32 crc kubenswrapper[4699]: I0226 11:25:32.719794 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqmqz" 
event={"ID":"a69df934-4fa7-472d-abe7-8fa4ec5d4296","Type":"ContainerStarted","Data":"0cbe859cad3566719b89bc1b0cbfabbe4d6c0bd549cbc5a1536325d87ef1795f"} Feb 26 11:25:32 crc kubenswrapper[4699]: I0226 11:25:32.739314 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wqmqz" podStartSLOduration=2.936186202 podStartE2EDuration="23.73929663s" podCreationTimestamp="2026-02-26 11:25:09 +0000 UTC" firstStartedPulling="2026-02-26 11:25:11.343104725 +0000 UTC m=+857.153931159" lastFinishedPulling="2026-02-26 11:25:32.146215153 +0000 UTC m=+877.957041587" observedRunningTime="2026-02-26 11:25:32.736431237 +0000 UTC m=+878.547257691" watchObservedRunningTime="2026-02-26 11:25:32.73929663 +0000 UTC m=+878.550123064" Feb 26 11:25:33 crc kubenswrapper[4699]: I0226 11:25:33.726845 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8l8n8" event={"ID":"15312afe-49aa-4681-8513-6ed9c774d222","Type":"ContainerStarted","Data":"56452d7d4e4d53e5e0d776eef0ec2c70219b0f7226a62ebb47fc6bdf3d76555a"} Feb 26 11:25:33 crc kubenswrapper[4699]: I0226 11:25:33.753705 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-8l8n8" podStartSLOduration=6.192539795 podStartE2EDuration="16.753688563s" podCreationTimestamp="2026-02-26 11:25:17 +0000 UTC" firstStartedPulling="2026-02-26 11:25:22.587943311 +0000 UTC m=+868.398769745" lastFinishedPulling="2026-02-26 11:25:33.149092079 +0000 UTC m=+878.959918513" observedRunningTime="2026-02-26 11:25:33.75254242 +0000 UTC m=+879.563368874" watchObservedRunningTime="2026-02-26 11:25:33.753688563 +0000 UTC m=+879.564514987" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.767456 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-jnrsc"] Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.768803 4699 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-jnrsc" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.773412 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-qmw66"] Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.774175 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.782352 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-r6rhm" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.782412 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.805886 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-jnrsc"] Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.822233 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-qmw66"] Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.838296 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5jrwg"] Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.839286 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.908053 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d674e733-7357-43e5-be9c-4d4e9bad252c-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-qmw66\" (UID: \"d674e733-7357-43e5-be9c-4d4e9bad252c\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.908127 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s95g\" (UniqueName: \"kubernetes.io/projected/c4897df9-3a79-41bf-a7ba-7a72d888f8e1-kube-api-access-4s95g\") pod \"nmstate-metrics-69594cc75-jnrsc\" (UID: \"c4897df9-3a79-41bf-a7ba-7a72d888f8e1\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-jnrsc" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.908169 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b26sz\" (UniqueName: \"kubernetes.io/projected/d674e733-7357-43e5-be9c-4d4e9bad252c-kube-api-access-b26sz\") pod \"nmstate-webhook-786f45cff4-qmw66\" (UID: \"d674e733-7357-43e5-be9c-4d4e9bad252c\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.911189 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx"] Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.912052 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.915003 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.915267 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jlnzj" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.917069 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 26 11:25:34 crc kubenswrapper[4699]: I0226 11:25:34.921370 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx"] Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.009846 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lwl\" (UniqueName: \"kubernetes.io/projected/80de38f0-8620-4e27-988e-6d85d7c8bc24-kube-api-access-k9lwl\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.009934 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d674e733-7357-43e5-be9c-4d4e9bad252c-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-qmw66\" (UID: \"d674e733-7357-43e5-be9c-4d4e9bad252c\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.009969 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s95g\" (UniqueName: \"kubernetes.io/projected/c4897df9-3a79-41bf-a7ba-7a72d888f8e1-kube-api-access-4s95g\") pod \"nmstate-metrics-69594cc75-jnrsc\" (UID: \"c4897df9-3a79-41bf-a7ba-7a72d888f8e1\") " 
pod="openshift-nmstate/nmstate-metrics-69594cc75-jnrsc" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.010002 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/80de38f0-8620-4e27-988e-6d85d7c8bc24-ovs-socket\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.010030 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b26sz\" (UniqueName: \"kubernetes.io/projected/d674e733-7357-43e5-be9c-4d4e9bad252c-kube-api-access-b26sz\") pod \"nmstate-webhook-786f45cff4-qmw66\" (UID: \"d674e733-7357-43e5-be9c-4d4e9bad252c\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.010054 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/80de38f0-8620-4e27-988e-6d85d7c8bc24-nmstate-lock\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.010083 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/80de38f0-8620-4e27-988e-6d85d7c8bc24-dbus-socket\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.021743 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d674e733-7357-43e5-be9c-4d4e9bad252c-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-qmw66\" (UID: \"d674e733-7357-43e5-be9c-4d4e9bad252c\") " 
pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.026793 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s95g\" (UniqueName: \"kubernetes.io/projected/c4897df9-3a79-41bf-a7ba-7a72d888f8e1-kube-api-access-4s95g\") pod \"nmstate-metrics-69594cc75-jnrsc\" (UID: \"c4897df9-3a79-41bf-a7ba-7a72d888f8e1\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-jnrsc" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.040106 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b26sz\" (UniqueName: \"kubernetes.io/projected/d674e733-7357-43e5-be9c-4d4e9bad252c-kube-api-access-b26sz\") pod \"nmstate-webhook-786f45cff4-qmw66\" (UID: \"d674e733-7357-43e5-be9c-4d4e9bad252c\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.103517 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-jnrsc" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.110939 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/13fc1aa0-a043-4b42-952b-7f718ff577d2-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-7f4bx\" (UID: \"13fc1aa0-a043-4b42-952b-7f718ff577d2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.110986 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/80de38f0-8620-4e27-988e-6d85d7c8bc24-ovs-socket\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.111018 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/80de38f0-8620-4e27-988e-6d85d7c8bc24-nmstate-lock\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.111313 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4rgn\" (UniqueName: \"kubernetes.io/projected/13fc1aa0-a043-4b42-952b-7f718ff577d2-kube-api-access-f4rgn\") pod \"nmstate-console-plugin-5dcbbd79cf-7f4bx\" (UID: \"13fc1aa0-a043-4b42-952b-7f718ff577d2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.111320 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/80de38f0-8620-4e27-988e-6d85d7c8bc24-ovs-socket\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.111347 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/80de38f0-8620-4e27-988e-6d85d7c8bc24-dbus-socket\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.111488 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/13fc1aa0-a043-4b42-952b-7f718ff577d2-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-7f4bx\" (UID: \"13fc1aa0-a043-4b42-952b-7f718ff577d2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.111534 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/80de38f0-8620-4e27-988e-6d85d7c8bc24-dbus-socket\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.111540 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/80de38f0-8620-4e27-988e-6d85d7c8bc24-nmstate-lock\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.111610 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lwl\" (UniqueName: \"kubernetes.io/projected/80de38f0-8620-4e27-988e-6d85d7c8bc24-kube-api-access-k9lwl\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.115885 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.123284 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-799ddfb64f-wf4l2"] Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.124229 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.143440 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-799ddfb64f-wf4l2"] Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.177794 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9lwl\" (UniqueName: \"kubernetes.io/projected/80de38f0-8620-4e27-988e-6d85d7c8bc24-kube-api-access-k9lwl\") pod \"nmstate-handler-5jrwg\" (UID: \"80de38f0-8620-4e27-988e-6d85d7c8bc24\") " pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.213313 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-oauth-serving-cert\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.213390 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/13fc1aa0-a043-4b42-952b-7f718ff577d2-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-7f4bx\" (UID: \"13fc1aa0-a043-4b42-952b-7f718ff577d2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.213419 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh94\" (UniqueName: \"kubernetes.io/projected/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-kube-api-access-vmh94\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.213463 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-console-oauth-config\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.213488 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4rgn\" (UniqueName: \"kubernetes.io/projected/13fc1aa0-a043-4b42-952b-7f718ff577d2-kube-api-access-f4rgn\") pod \"nmstate-console-plugin-5dcbbd79cf-7f4bx\" (UID: \"13fc1aa0-a043-4b42-952b-7f718ff577d2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.213503 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-trusted-ca-bundle\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.213547 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-service-ca\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.213571 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-console-config\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc 
kubenswrapper[4699]: I0226 11:25:35.213610 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-console-serving-cert\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.213635 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/13fc1aa0-a043-4b42-952b-7f718ff577d2-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-7f4bx\" (UID: \"13fc1aa0-a043-4b42-952b-7f718ff577d2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.214483 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/13fc1aa0-a043-4b42-952b-7f718ff577d2-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-7f4bx\" (UID: \"13fc1aa0-a043-4b42-952b-7f718ff577d2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.216639 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/13fc1aa0-a043-4b42-952b-7f718ff577d2-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-7f4bx\" (UID: \"13fc1aa0-a043-4b42-952b-7f718ff577d2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.231275 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4rgn\" (UniqueName: \"kubernetes.io/projected/13fc1aa0-a043-4b42-952b-7f718ff577d2-kube-api-access-f4rgn\") pod \"nmstate-console-plugin-5dcbbd79cf-7f4bx\" (UID: \"13fc1aa0-a043-4b42-952b-7f718ff577d2\") " 
pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.314505 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-oauth-serving-cert\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.314590 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh94\" (UniqueName: \"kubernetes.io/projected/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-kube-api-access-vmh94\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.314616 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-console-oauth-config\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.318202 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-console-oauth-config\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.318282 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-trusted-ca-bundle\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " 
pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.318319 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-service-ca\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.318351 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-console-config\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.318375 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-console-serving-cert\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.320983 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-console-serving-cert\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.321637 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-oauth-serving-cert\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 
26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.335616 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-service-ca\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.336196 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-console-config\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.338018 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh94\" (UniqueName: \"kubernetes.io/projected/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-kube-api-access-vmh94\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.369696 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1f64b56-52a9-4f58-a20b-04c94c94fb9d-trusted-ca-bundle\") pod \"console-799ddfb64f-wf4l2\" (UID: \"e1f64b56-52a9-4f58-a20b-04c94c94fb9d\") " pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.457237 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:35 crc kubenswrapper[4699]: W0226 11:25:35.484595 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80de38f0_8620_4e27_988e_6d85d7c8bc24.slice/crio-cf3c458fa688f071c79305c0c35e01ea234329467585d8accc935ecd72622e2b WatchSource:0}: Error finding container cf3c458fa688f071c79305c0c35e01ea234329467585d8accc935ecd72622e2b: Status 404 returned error can't find the container with id cf3c458fa688f071c79305c0c35e01ea234329467585d8accc935ecd72622e2b Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.494928 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.527828 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.539293 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-jnrsc"] Feb 26 11:25:35 crc kubenswrapper[4699]: W0226 11:25:35.565277 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4897df9_3a79_41bf_a7ba_7a72d888f8e1.slice/crio-af4684c7fb3b349d3e6c643049f0fbcb46851639637ba37a76dec29f1efb0a24 WatchSource:0}: Error finding container af4684c7fb3b349d3e6c643049f0fbcb46851639637ba37a76dec29f1efb0a24: Status 404 returned error can't find the container with id af4684c7fb3b349d3e6c643049f0fbcb46851639637ba37a76dec29f1efb0a24 Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.739585 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5jrwg" 
event={"ID":"80de38f0-8620-4e27-988e-6d85d7c8bc24","Type":"ContainerStarted","Data":"cf3c458fa688f071c79305c0c35e01ea234329467585d8accc935ecd72622e2b"} Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.741313 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-jnrsc" event={"ID":"c4897df9-3a79-41bf-a7ba-7a72d888f8e1","Type":"ContainerStarted","Data":"af4684c7fb3b349d3e6c643049f0fbcb46851639637ba37a76dec29f1efb0a24"} Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.802529 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-qmw66"] Feb 26 11:25:35 crc kubenswrapper[4699]: W0226 11:25:35.806620 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd674e733_7357_43e5_be9c_4d4e9bad252c.slice/crio-3f421296c721c33476eb00fa702942c3481d0aebc3247b470d9edcb3c9bc06b0 WatchSource:0}: Error finding container 3f421296c721c33476eb00fa702942c3481d0aebc3247b470d9edcb3c9bc06b0: Status 404 returned error can't find the container with id 3f421296c721c33476eb00fa702942c3481d0aebc3247b470d9edcb3c9bc06b0 Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.939651 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-799ddfb64f-wf4l2"] Feb 26 11:25:35 crc kubenswrapper[4699]: W0226 11:25:35.941499 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1f64b56_52a9_4f58_a20b_04c94c94fb9d.slice/crio-86ffd65f29058f41e7d42a7289882fb46913e5f44d0210f4d4320d8b53187a2d WatchSource:0}: Error finding container 86ffd65f29058f41e7d42a7289882fb46913e5f44d0210f4d4320d8b53187a2d: Status 404 returned error can't find the container with id 86ffd65f29058f41e7d42a7289882fb46913e5f44d0210f4d4320d8b53187a2d Feb 26 11:25:35 crc kubenswrapper[4699]: I0226 11:25:35.967092 4699 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx"] Feb 26 11:25:35 crc kubenswrapper[4699]: W0226 11:25:35.972512 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13fc1aa0_a043_4b42_952b_7f718ff577d2.slice/crio-d60d16c47edb3972def6d97df802c1b1a86c6dbe8cfce544e1fdaf762d875d54 WatchSource:0}: Error finding container d60d16c47edb3972def6d97df802c1b1a86c6dbe8cfce544e1fdaf762d875d54: Status 404 returned error can't find the container with id d60d16c47edb3972def6d97df802c1b1a86c6dbe8cfce544e1fdaf762d875d54 Feb 26 11:25:36 crc kubenswrapper[4699]: I0226 11:25:36.747919 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-799ddfb64f-wf4l2" event={"ID":"e1f64b56-52a9-4f58-a20b-04c94c94fb9d","Type":"ContainerStarted","Data":"65521525e894a180389887166ec1a9561b3180c80b9d12275598c55fbd6ce6cc"} Feb 26 11:25:36 crc kubenswrapper[4699]: I0226 11:25:36.748321 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-799ddfb64f-wf4l2" event={"ID":"e1f64b56-52a9-4f58-a20b-04c94c94fb9d","Type":"ContainerStarted","Data":"86ffd65f29058f41e7d42a7289882fb46913e5f44d0210f4d4320d8b53187a2d"} Feb 26 11:25:36 crc kubenswrapper[4699]: I0226 11:25:36.749883 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" event={"ID":"d674e733-7357-43e5-be9c-4d4e9bad252c","Type":"ContainerStarted","Data":"3f421296c721c33476eb00fa702942c3481d0aebc3247b470d9edcb3c9bc06b0"} Feb 26 11:25:36 crc kubenswrapper[4699]: I0226 11:25:36.750753 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" event={"ID":"13fc1aa0-a043-4b42-952b-7f718ff577d2","Type":"ContainerStarted","Data":"d60d16c47edb3972def6d97df802c1b1a86c6dbe8cfce544e1fdaf762d875d54"} Feb 26 11:25:36 crc kubenswrapper[4699]: I0226 11:25:36.771748 4699 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-799ddfb64f-wf4l2" podStartSLOduration=1.771730948 podStartE2EDuration="1.771730948s" podCreationTimestamp="2026-02-26 11:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:25:36.767579347 +0000 UTC m=+882.578405801" watchObservedRunningTime="2026-02-26 11:25:36.771730948 +0000 UTC m=+882.582557382" Feb 26 11:25:39 crc kubenswrapper[4699]: I0226 11:25:39.772013 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" event={"ID":"13fc1aa0-a043-4b42-952b-7f718ff577d2","Type":"ContainerStarted","Data":"0e03e537a0b0e5d937c5a32b928e5e0bfc6bd5e36979095c545552e26e58e356"} Feb 26 11:25:39 crc kubenswrapper[4699]: I0226 11:25:39.774457 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" event={"ID":"d674e733-7357-43e5-be9c-4d4e9bad252c","Type":"ContainerStarted","Data":"63c66c4576ed4461bd7fbc121a3ae4f04ecb97cee007e9d72a613167741796d6"} Feb 26 11:25:39 crc kubenswrapper[4699]: I0226 11:25:39.775237 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" Feb 26 11:25:39 crc kubenswrapper[4699]: I0226 11:25:39.777384 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-jnrsc" event={"ID":"c4897df9-3a79-41bf-a7ba-7a72d888f8e1","Type":"ContainerStarted","Data":"262c78d22a675e5b5e6f9df75c304d1e26ca3bde89ecdbd804c743e2b4234713"} Feb 26 11:25:39 crc kubenswrapper[4699]: I0226 11:25:39.779101 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5jrwg" event={"ID":"80de38f0-8620-4e27-988e-6d85d7c8bc24","Type":"ContainerStarted","Data":"3b299fc65625c25cc65e2a7b038bb4412403edc1687c45536ba1818e4a3bdeaf"} Feb 
26 11:25:39 crc kubenswrapper[4699]: I0226 11:25:39.779261 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:39 crc kubenswrapper[4699]: I0226 11:25:39.792505 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-7f4bx" podStartSLOduration=2.631660578 podStartE2EDuration="5.792487793s" podCreationTimestamp="2026-02-26 11:25:34 +0000 UTC" firstStartedPulling="2026-02-26 11:25:35.974875724 +0000 UTC m=+881.785702158" lastFinishedPulling="2026-02-26 11:25:39.135702939 +0000 UTC m=+884.946529373" observedRunningTime="2026-02-26 11:25:39.787298971 +0000 UTC m=+885.598125426" watchObservedRunningTime="2026-02-26 11:25:39.792487793 +0000 UTC m=+885.603314227" Feb 26 11:25:39 crc kubenswrapper[4699]: I0226 11:25:39.808915 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5jrwg" podStartSLOduration=2.150407497 podStartE2EDuration="5.808894951s" podCreationTimestamp="2026-02-26 11:25:34 +0000 UTC" firstStartedPulling="2026-02-26 11:25:35.486859536 +0000 UTC m=+881.297685970" lastFinishedPulling="2026-02-26 11:25:39.14534699 +0000 UTC m=+884.956173424" observedRunningTime="2026-02-26 11:25:39.80543103 +0000 UTC m=+885.616257464" watchObservedRunningTime="2026-02-26 11:25:39.808894951 +0000 UTC m=+885.619721395" Feb 26 11:25:39 crc kubenswrapper[4699]: I0226 11:25:39.827941 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" podStartSLOduration=2.489405032 podStartE2EDuration="5.827921625s" podCreationTimestamp="2026-02-26 11:25:34 +0000 UTC" firstStartedPulling="2026-02-26 11:25:35.809879976 +0000 UTC m=+881.620706410" lastFinishedPulling="2026-02-26 11:25:39.148396559 +0000 UTC m=+884.959223003" observedRunningTime="2026-02-26 11:25:39.82294612 +0000 UTC m=+885.633772574" 
watchObservedRunningTime="2026-02-26 11:25:39.827921625 +0000 UTC m=+885.638748059" Feb 26 11:25:40 crc kubenswrapper[4699]: I0226 11:25:40.133060 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:40 crc kubenswrapper[4699]: I0226 11:25:40.133464 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:40 crc kubenswrapper[4699]: I0226 11:25:40.168225 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:40 crc kubenswrapper[4699]: I0226 11:25:40.821592 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wqmqz" Feb 26 11:25:40 crc kubenswrapper[4699]: I0226 11:25:40.885825 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wqmqz"] Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.018326 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xv8lg"] Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.018549 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xv8lg" podUID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerName="registry-server" containerID="cri-o://e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9" gracePeriod=2 Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.392534 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.531131 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-catalog-content\") pod \"e84a1dbc-431c-4897-b5fd-f04460b7f943\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.531199 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-utilities\") pod \"e84a1dbc-431c-4897-b5fd-f04460b7f943\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.531219 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwr8w\" (UniqueName: \"kubernetes.io/projected/e84a1dbc-431c-4897-b5fd-f04460b7f943-kube-api-access-jwr8w\") pod \"e84a1dbc-431c-4897-b5fd-f04460b7f943\" (UID: \"e84a1dbc-431c-4897-b5fd-f04460b7f943\") " Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.532034 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-utilities" (OuterVolumeSpecName: "utilities") pod "e84a1dbc-431c-4897-b5fd-f04460b7f943" (UID: "e84a1dbc-431c-4897-b5fd-f04460b7f943"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.537333 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84a1dbc-431c-4897-b5fd-f04460b7f943-kube-api-access-jwr8w" (OuterVolumeSpecName: "kube-api-access-jwr8w") pod "e84a1dbc-431c-4897-b5fd-f04460b7f943" (UID: "e84a1dbc-431c-4897-b5fd-f04460b7f943"). InnerVolumeSpecName "kube-api-access-jwr8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.633195 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.633231 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwr8w\" (UniqueName: \"kubernetes.io/projected/e84a1dbc-431c-4897-b5fd-f04460b7f943-kube-api-access-jwr8w\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.690451 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e84a1dbc-431c-4897-b5fd-f04460b7f943" (UID: "e84a1dbc-431c-4897-b5fd-f04460b7f943"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.734047 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e84a1dbc-431c-4897-b5fd-f04460b7f943-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.793786 4699 generic.go:334] "Generic (PLEG): container finished" podID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerID="e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9" exitCode=0 Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.793898 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xv8lg" event={"ID":"e84a1dbc-431c-4897-b5fd-f04460b7f943","Type":"ContainerDied","Data":"e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9"} Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.793925 4699 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-xv8lg" event={"ID":"e84a1dbc-431c-4897-b5fd-f04460b7f943","Type":"ContainerDied","Data":"6d8c08def942c9655caee92b122902fc51271c1537ca60f7447fb09b383d1bcf"} Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.793941 4699 scope.go:117] "RemoveContainer" containerID="e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9" Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.794339 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xv8lg" Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.826834 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xv8lg"] Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.826892 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xv8lg"] Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.826999 4699 scope.go:117] "RemoveContainer" containerID="b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4" Feb 26 11:25:41 crc kubenswrapper[4699]: I0226 11:25:41.863736 4699 scope.go:117] "RemoveContainer" containerID="388e074c3a65bb78e59f3313093bb2c909eaec385eec81e289a50a066aaab9b5" Feb 26 11:25:42 crc kubenswrapper[4699]: I0226 11:25:42.268709 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84a1dbc-431c-4897-b5fd-f04460b7f943" path="/var/lib/kubelet/pods/e84a1dbc-431c-4897-b5fd-f04460b7f943/volumes" Feb 26 11:25:42 crc kubenswrapper[4699]: I0226 11:25:42.539135 4699 scope.go:117] "RemoveContainer" containerID="e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9" Feb 26 11:25:42 crc kubenswrapper[4699]: E0226 11:25:42.539673 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9\": container with ID starting with 
e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9 not found: ID does not exist" containerID="e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9" Feb 26 11:25:42 crc kubenswrapper[4699]: I0226 11:25:42.539706 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9"} err="failed to get container status \"e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9\": rpc error: code = NotFound desc = could not find container \"e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9\": container with ID starting with e566d9d06139e29d621cf7bc7b00ad22f80105f4a94a4e3e816956084476cfc9 not found: ID does not exist" Feb 26 11:25:42 crc kubenswrapper[4699]: I0226 11:25:42.539726 4699 scope.go:117] "RemoveContainer" containerID="b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4" Feb 26 11:25:42 crc kubenswrapper[4699]: E0226 11:25:42.539947 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4\": container with ID starting with b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4 not found: ID does not exist" containerID="b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4" Feb 26 11:25:42 crc kubenswrapper[4699]: I0226 11:25:42.539970 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4"} err="failed to get container status \"b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4\": rpc error: code = NotFound desc = could not find container \"b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4\": container with ID starting with b4bfd779c465bd24858dde6afd4dca28e4ac0e16d185e7df709e6584c822fee4 not found: ID does not 
exist" Feb 26 11:25:42 crc kubenswrapper[4699]: I0226 11:25:42.539983 4699 scope.go:117] "RemoveContainer" containerID="388e074c3a65bb78e59f3313093bb2c909eaec385eec81e289a50a066aaab9b5" Feb 26 11:25:42 crc kubenswrapper[4699]: E0226 11:25:42.540203 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"388e074c3a65bb78e59f3313093bb2c909eaec385eec81e289a50a066aaab9b5\": container with ID starting with 388e074c3a65bb78e59f3313093bb2c909eaec385eec81e289a50a066aaab9b5 not found: ID does not exist" containerID="388e074c3a65bb78e59f3313093bb2c909eaec385eec81e289a50a066aaab9b5" Feb 26 11:25:42 crc kubenswrapper[4699]: I0226 11:25:42.540222 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"388e074c3a65bb78e59f3313093bb2c909eaec385eec81e289a50a066aaab9b5"} err="failed to get container status \"388e074c3a65bb78e59f3313093bb2c909eaec385eec81e289a50a066aaab9b5\": rpc error: code = NotFound desc = could not find container \"388e074c3a65bb78e59f3313093bb2c909eaec385eec81e289a50a066aaab9b5\": container with ID starting with 388e074c3a65bb78e59f3313093bb2c909eaec385eec81e289a50a066aaab9b5 not found: ID does not exist" Feb 26 11:25:43 crc kubenswrapper[4699]: I0226 11:25:43.808127 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-jnrsc" event={"ID":"c4897df9-3a79-41bf-a7ba-7a72d888f8e1","Type":"ContainerStarted","Data":"827f63efb0e8e180c7cb29b8b50b93fc12b981356642e7812eb67717b5870aee"} Feb 26 11:25:43 crc kubenswrapper[4699]: I0226 11:25:43.829457 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-jnrsc" podStartSLOduration=2.5427655160000002 podStartE2EDuration="9.829442422s" podCreationTimestamp="2026-02-26 11:25:34 +0000 UTC" firstStartedPulling="2026-02-26 11:25:35.571481491 +0000 UTC m=+881.382307915" lastFinishedPulling="2026-02-26 
11:25:42.858158387 +0000 UTC m=+888.668984821" observedRunningTime="2026-02-26 11:25:43.824966642 +0000 UTC m=+889.635793096" watchObservedRunningTime="2026-02-26 11:25:43.829442422 +0000 UTC m=+889.640268856" Feb 26 11:25:45 crc kubenswrapper[4699]: I0226 11:25:45.481421 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5jrwg" Feb 26 11:25:45 crc kubenswrapper[4699]: I0226 11:25:45.497184 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:45 crc kubenswrapper[4699]: I0226 11:25:45.497229 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:45 crc kubenswrapper[4699]: I0226 11:25:45.504617 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:45 crc kubenswrapper[4699]: I0226 11:25:45.829629 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-799ddfb64f-wf4l2" Feb 26 11:25:45 crc kubenswrapper[4699]: I0226 11:25:45.885415 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hnsh7"] Feb 26 11:25:55 crc kubenswrapper[4699]: I0226 11:25:55.122156 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qmw66" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.131890 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535086-jjp9j"] Feb 26 11:26:00 crc kubenswrapper[4699]: E0226 11:26:00.133617 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerName="registry-server" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.133715 4699 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerName="registry-server" Feb 26 11:26:00 crc kubenswrapper[4699]: E0226 11:26:00.133785 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerName="extract-utilities" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.133925 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerName="extract-utilities" Feb 26 11:26:00 crc kubenswrapper[4699]: E0226 11:26:00.133980 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerName="extract-content" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.134031 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerName="extract-content" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.134196 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84a1dbc-431c-4897-b5fd-f04460b7f943" containerName="registry-server" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.134623 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535086-jjp9j" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.137394 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.137611 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.137662 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.142205 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535086-jjp9j"] Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.185334 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnvxf\" (UniqueName: \"kubernetes.io/projected/fce3efa9-6f6f-4e81-a7a4-6249237a0d61-kube-api-access-tnvxf\") pod \"auto-csr-approver-29535086-jjp9j\" (UID: \"fce3efa9-6f6f-4e81-a7a4-6249237a0d61\") " pod="openshift-infra/auto-csr-approver-29535086-jjp9j" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.286985 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnvxf\" (UniqueName: \"kubernetes.io/projected/fce3efa9-6f6f-4e81-a7a4-6249237a0d61-kube-api-access-tnvxf\") pod \"auto-csr-approver-29535086-jjp9j\" (UID: \"fce3efa9-6f6f-4e81-a7a4-6249237a0d61\") " pod="openshift-infra/auto-csr-approver-29535086-jjp9j" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.305887 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnvxf\" (UniqueName: \"kubernetes.io/projected/fce3efa9-6f6f-4e81-a7a4-6249237a0d61-kube-api-access-tnvxf\") pod \"auto-csr-approver-29535086-jjp9j\" (UID: \"fce3efa9-6f6f-4e81-a7a4-6249237a0d61\") " 
pod="openshift-infra/auto-csr-approver-29535086-jjp9j" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.472469 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535086-jjp9j" Feb 26 11:26:00 crc kubenswrapper[4699]: I0226 11:26:00.861341 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535086-jjp9j"] Feb 26 11:26:01 crc kubenswrapper[4699]: I0226 11:26:01.919928 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535086-jjp9j" event={"ID":"fce3efa9-6f6f-4e81-a7a4-6249237a0d61","Type":"ContainerStarted","Data":"6a8f188e1e1d79ac2bc4764d8f75fb2c0ee626e5c9dfacdd5ec8bb51719ceaa4"} Feb 26 11:26:02 crc kubenswrapper[4699]: I0226 11:26:02.929957 4699 generic.go:334] "Generic (PLEG): container finished" podID="fce3efa9-6f6f-4e81-a7a4-6249237a0d61" containerID="842f6cf352666ae13feda0b772e0ee74a200121a74a35bd2b4b96deac77bd6aa" exitCode=0 Feb 26 11:26:02 crc kubenswrapper[4699]: I0226 11:26:02.930056 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535086-jjp9j" event={"ID":"fce3efa9-6f6f-4e81-a7a4-6249237a0d61","Type":"ContainerDied","Data":"842f6cf352666ae13feda0b772e0ee74a200121a74a35bd2b4b96deac77bd6aa"} Feb 26 11:26:04 crc kubenswrapper[4699]: I0226 11:26:04.203800 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535086-jjp9j" Feb 26 11:26:04 crc kubenswrapper[4699]: I0226 11:26:04.266834 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnvxf\" (UniqueName: \"kubernetes.io/projected/fce3efa9-6f6f-4e81-a7a4-6249237a0d61-kube-api-access-tnvxf\") pod \"fce3efa9-6f6f-4e81-a7a4-6249237a0d61\" (UID: \"fce3efa9-6f6f-4e81-a7a4-6249237a0d61\") " Feb 26 11:26:04 crc kubenswrapper[4699]: I0226 11:26:04.278716 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce3efa9-6f6f-4e81-a7a4-6249237a0d61-kube-api-access-tnvxf" (OuterVolumeSpecName: "kube-api-access-tnvxf") pod "fce3efa9-6f6f-4e81-a7a4-6249237a0d61" (UID: "fce3efa9-6f6f-4e81-a7a4-6249237a0d61"). InnerVolumeSpecName "kube-api-access-tnvxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:26:04 crc kubenswrapper[4699]: I0226 11:26:04.371228 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnvxf\" (UniqueName: \"kubernetes.io/projected/fce3efa9-6f6f-4e81-a7a4-6249237a0d61-kube-api-access-tnvxf\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:04 crc kubenswrapper[4699]: I0226 11:26:04.945293 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535086-jjp9j" event={"ID":"fce3efa9-6f6f-4e81-a7a4-6249237a0d61","Type":"ContainerDied","Data":"6a8f188e1e1d79ac2bc4764d8f75fb2c0ee626e5c9dfacdd5ec8bb51719ceaa4"} Feb 26 11:26:04 crc kubenswrapper[4699]: I0226 11:26:04.945668 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a8f188e1e1d79ac2bc4764d8f75fb2c0ee626e5c9dfacdd5ec8bb51719ceaa4" Feb 26 11:26:04 crc kubenswrapper[4699]: I0226 11:26:04.945388 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535086-jjp9j" Feb 26 11:26:05 crc kubenswrapper[4699]: I0226 11:26:05.252179 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535080-dcs8z"] Feb 26 11:26:05 crc kubenswrapper[4699]: I0226 11:26:05.256096 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535080-dcs8z"] Feb 26 11:26:06 crc kubenswrapper[4699]: I0226 11:26:06.268893 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ea4516-0708-4b4a-9dd5-75e6220a55d4" path="/var/lib/kubelet/pods/c9ea4516-0708-4b4a-9dd5-75e6220a55d4/volumes" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.521659 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb"] Feb 26 11:26:08 crc kubenswrapper[4699]: E0226 11:26:08.522262 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce3efa9-6f6f-4e81-a7a4-6249237a0d61" containerName="oc" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.522279 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce3efa9-6f6f-4e81-a7a4-6249237a0d61" containerName="oc" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.522401 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce3efa9-6f6f-4e81-a7a4-6249237a0d61" containerName="oc" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.523332 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.525739 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.535777 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb"] Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.625098 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.625190 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zvr\" (UniqueName: \"kubernetes.io/projected/2628fd13-0f89-4bb3-9b76-86a9331a303e-kube-api-access-b6zvr\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.625212 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:08 crc kubenswrapper[4699]: 
I0226 11:26:08.727009 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.727099 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zvr\" (UniqueName: \"kubernetes.io/projected/2628fd13-0f89-4bb3-9b76-86a9331a303e-kube-api-access-b6zvr\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.727146 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.727661 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.727869 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.745985 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6zvr\" (UniqueName: \"kubernetes.io/projected/2628fd13-0f89-4bb3-9b76-86a9331a303e-kube-api-access-b6zvr\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:08 crc kubenswrapper[4699]: I0226 11:26:08.847995 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:09 crc kubenswrapper[4699]: I0226 11:26:09.030585 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb"] Feb 26 11:26:09 crc kubenswrapper[4699]: I0226 11:26:09.977464 4699 generic.go:334] "Generic (PLEG): container finished" podID="2628fd13-0f89-4bb3-9b76-86a9331a303e" containerID="9a479ac992935636809c4c304863e5e93d2ad4ebac7734e672373d337cb9fb85" exitCode=0 Feb 26 11:26:09 crc kubenswrapper[4699]: I0226 11:26:09.977510 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" event={"ID":"2628fd13-0f89-4bb3-9b76-86a9331a303e","Type":"ContainerDied","Data":"9a479ac992935636809c4c304863e5e93d2ad4ebac7734e672373d337cb9fb85"} Feb 26 11:26:09 crc kubenswrapper[4699]: I0226 11:26:09.977768 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" event={"ID":"2628fd13-0f89-4bb3-9b76-86a9331a303e","Type":"ContainerStarted","Data":"16696c6f79ef696b7807a897e6edabd91a65b5b5a7df8c6396ccce83c630f467"} Feb 26 11:26:10 crc kubenswrapper[4699]: I0226 11:26:10.931515 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-hnsh7" podUID="e6bdcf19-db76-497c-a2fe-a6de38fae724" containerName="console" containerID="cri-o://53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f" gracePeriod=15 Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.248281 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hnsh7_e6bdcf19-db76-497c-a2fe-a6de38fae724/console/0.log" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.248608 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.427623 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-oauth-config\") pod \"e6bdcf19-db76-497c-a2fe-a6de38fae724\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.427710 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-serving-cert\") pod \"e6bdcf19-db76-497c-a2fe-a6de38fae724\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.427732 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-config\") pod \"e6bdcf19-db76-497c-a2fe-a6de38fae724\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.427771 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-service-ca\") pod \"e6bdcf19-db76-497c-a2fe-a6de38fae724\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.427797 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-trusted-ca-bundle\") pod \"e6bdcf19-db76-497c-a2fe-a6de38fae724\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.427903 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvvdh\" (UniqueName: \"kubernetes.io/projected/e6bdcf19-db76-497c-a2fe-a6de38fae724-kube-api-access-wvvdh\") pod \"e6bdcf19-db76-497c-a2fe-a6de38fae724\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.427928 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-oauth-serving-cert\") pod \"e6bdcf19-db76-497c-a2fe-a6de38fae724\" (UID: \"e6bdcf19-db76-497c-a2fe-a6de38fae724\") " Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.429740 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-service-ca" (OuterVolumeSpecName: "service-ca") pod "e6bdcf19-db76-497c-a2fe-a6de38fae724" (UID: "e6bdcf19-db76-497c-a2fe-a6de38fae724"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.429869 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-config" (OuterVolumeSpecName: "console-config") pod "e6bdcf19-db76-497c-a2fe-a6de38fae724" (UID: "e6bdcf19-db76-497c-a2fe-a6de38fae724"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.430139 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e6bdcf19-db76-497c-a2fe-a6de38fae724" (UID: "e6bdcf19-db76-497c-a2fe-a6de38fae724"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.430176 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e6bdcf19-db76-497c-a2fe-a6de38fae724" (UID: "e6bdcf19-db76-497c-a2fe-a6de38fae724"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.437438 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6bdcf19-db76-497c-a2fe-a6de38fae724-kube-api-access-wvvdh" (OuterVolumeSpecName: "kube-api-access-wvvdh") pod "e6bdcf19-db76-497c-a2fe-a6de38fae724" (UID: "e6bdcf19-db76-497c-a2fe-a6de38fae724"). InnerVolumeSpecName "kube-api-access-wvvdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.438406 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e6bdcf19-db76-497c-a2fe-a6de38fae724" (UID: "e6bdcf19-db76-497c-a2fe-a6de38fae724"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.439210 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e6bdcf19-db76-497c-a2fe-a6de38fae724" (UID: "e6bdcf19-db76-497c-a2fe-a6de38fae724"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.530033 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvvdh\" (UniqueName: \"kubernetes.io/projected/e6bdcf19-db76-497c-a2fe-a6de38fae724-kube-api-access-wvvdh\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.530086 4699 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.530107 4699 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.530128 4699 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.530174 4699 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.530190 4699 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.530206 4699 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6bdcf19-db76-497c-a2fe-a6de38fae724-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.989437 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hnsh7_e6bdcf19-db76-497c-a2fe-a6de38fae724/console/0.log" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.989481 4699 generic.go:334] "Generic (PLEG): container finished" podID="e6bdcf19-db76-497c-a2fe-a6de38fae724" containerID="53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f" exitCode=2 Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.989512 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hnsh7" event={"ID":"e6bdcf19-db76-497c-a2fe-a6de38fae724","Type":"ContainerDied","Data":"53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f"} Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.989538 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hnsh7" 
event={"ID":"e6bdcf19-db76-497c-a2fe-a6de38fae724","Type":"ContainerDied","Data":"70e987324485f04a528051e1c4554d8c5806c907f67af5218c0970ab13cf9e3b"} Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.989553 4699 scope.go:117] "RemoveContainer" containerID="53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f" Feb 26 11:26:11 crc kubenswrapper[4699]: I0226 11:26:11.989681 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hnsh7" Feb 26 11:26:12 crc kubenswrapper[4699]: I0226 11:26:12.034149 4699 scope.go:117] "RemoveContainer" containerID="53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f" Feb 26 11:26:12 crc kubenswrapper[4699]: E0226 11:26:12.034539 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f\": container with ID starting with 53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f not found: ID does not exist" containerID="53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f" Feb 26 11:26:12 crc kubenswrapper[4699]: I0226 11:26:12.034591 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f"} err="failed to get container status \"53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f\": rpc error: code = NotFound desc = could not find container \"53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f\": container with ID starting with 53695fe3f72ab23b0bc81ef3a7cecb32887acb430df553db59397c8ca54a398f not found: ID does not exist" Feb 26 11:26:12 crc kubenswrapper[4699]: I0226 11:26:12.036646 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hnsh7"] Feb 26 11:26:12 crc kubenswrapper[4699]: I0226 11:26:12.040596 4699 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-hnsh7"] Feb 26 11:26:12 crc kubenswrapper[4699]: I0226 11:26:12.280748 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6bdcf19-db76-497c-a2fe-a6de38fae724" path="/var/lib/kubelet/pods/e6bdcf19-db76-497c-a2fe-a6de38fae724/volumes" Feb 26 11:26:13 crc kubenswrapper[4699]: I0226 11:26:13.000450 4699 generic.go:334] "Generic (PLEG): container finished" podID="2628fd13-0f89-4bb3-9b76-86a9331a303e" containerID="39f20a8262954809bb749b951e7954c51b3b01f83d6d9b533079491f91de7f81" exitCode=0 Feb 26 11:26:13 crc kubenswrapper[4699]: I0226 11:26:13.000543 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" event={"ID":"2628fd13-0f89-4bb3-9b76-86a9331a303e","Type":"ContainerDied","Data":"39f20a8262954809bb749b951e7954c51b3b01f83d6d9b533079491f91de7f81"} Feb 26 11:26:14 crc kubenswrapper[4699]: I0226 11:26:14.009508 4699 generic.go:334] "Generic (PLEG): container finished" podID="2628fd13-0f89-4bb3-9b76-86a9331a303e" containerID="071c5a4ed9d914dc4937b803376a521876de0e1bc41ff9eef3bdef617821dfbe" exitCode=0 Feb 26 11:26:14 crc kubenswrapper[4699]: I0226 11:26:14.009736 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" event={"ID":"2628fd13-0f89-4bb3-9b76-86a9331a303e","Type":"ContainerDied","Data":"071c5a4ed9d914dc4937b803376a521876de0e1bc41ff9eef3bdef617821dfbe"} Feb 26 11:26:15 crc kubenswrapper[4699]: I0226 11:26:15.266428 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:15 crc kubenswrapper[4699]: I0226 11:26:15.377643 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-util\") pod \"2628fd13-0f89-4bb3-9b76-86a9331a303e\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " Feb 26 11:26:15 crc kubenswrapper[4699]: I0226 11:26:15.377932 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-bundle\") pod \"2628fd13-0f89-4bb3-9b76-86a9331a303e\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " Feb 26 11:26:15 crc kubenswrapper[4699]: I0226 11:26:15.378028 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6zvr\" (UniqueName: \"kubernetes.io/projected/2628fd13-0f89-4bb3-9b76-86a9331a303e-kube-api-access-b6zvr\") pod \"2628fd13-0f89-4bb3-9b76-86a9331a303e\" (UID: \"2628fd13-0f89-4bb3-9b76-86a9331a303e\") " Feb 26 11:26:15 crc kubenswrapper[4699]: I0226 11:26:15.380209 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-bundle" (OuterVolumeSpecName: "bundle") pod "2628fd13-0f89-4bb3-9b76-86a9331a303e" (UID: "2628fd13-0f89-4bb3-9b76-86a9331a303e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:26:15 crc kubenswrapper[4699]: I0226 11:26:15.384427 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2628fd13-0f89-4bb3-9b76-86a9331a303e-kube-api-access-b6zvr" (OuterVolumeSpecName: "kube-api-access-b6zvr") pod "2628fd13-0f89-4bb3-9b76-86a9331a303e" (UID: "2628fd13-0f89-4bb3-9b76-86a9331a303e"). InnerVolumeSpecName "kube-api-access-b6zvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:26:15 crc kubenswrapper[4699]: I0226 11:26:15.393666 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-util" (OuterVolumeSpecName: "util") pod "2628fd13-0f89-4bb3-9b76-86a9331a303e" (UID: "2628fd13-0f89-4bb3-9b76-86a9331a303e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:26:15 crc kubenswrapper[4699]: I0226 11:26:15.479146 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6zvr\" (UniqueName: \"kubernetes.io/projected/2628fd13-0f89-4bb3-9b76-86a9331a303e-kube-api-access-b6zvr\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:15 crc kubenswrapper[4699]: I0226 11:26:15.479183 4699 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-util\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:15 crc kubenswrapper[4699]: I0226 11:26:15.479194 4699 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2628fd13-0f89-4bb3-9b76-86a9331a303e-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:16 crc kubenswrapper[4699]: I0226 11:26:16.025543 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" event={"ID":"2628fd13-0f89-4bb3-9b76-86a9331a303e","Type":"ContainerDied","Data":"16696c6f79ef696b7807a897e6edabd91a65b5b5a7df8c6396ccce83c630f467"} Feb 26 11:26:16 crc kubenswrapper[4699]: I0226 11:26:16.025601 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16696c6f79ef696b7807a897e6edabd91a65b5b5a7df8c6396ccce83c630f467" Feb 26 11:26:16 crc kubenswrapper[4699]: I0226 11:26:16.025740 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb" Feb 26 11:26:21 crc kubenswrapper[4699]: I0226 11:26:21.013932 4699 scope.go:117] "RemoveContainer" containerID="ec18e4fa3c26a9a3b620eb9c167811e69c8b0db26c298c317aa409e857f17f0c" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.358356 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b"] Feb 26 11:26:25 crc kubenswrapper[4699]: E0226 11:26:25.359105 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2628fd13-0f89-4bb3-9b76-86a9331a303e" containerName="pull" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.359145 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2628fd13-0f89-4bb3-9b76-86a9331a303e" containerName="pull" Feb 26 11:26:25 crc kubenswrapper[4699]: E0226 11:26:25.359164 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2628fd13-0f89-4bb3-9b76-86a9331a303e" containerName="extract" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.359172 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2628fd13-0f89-4bb3-9b76-86a9331a303e" containerName="extract" Feb 26 11:26:25 crc kubenswrapper[4699]: E0226 11:26:25.359184 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bdcf19-db76-497c-a2fe-a6de38fae724" containerName="console" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.359192 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bdcf19-db76-497c-a2fe-a6de38fae724" containerName="console" Feb 26 11:26:25 crc kubenswrapper[4699]: E0226 11:26:25.359204 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2628fd13-0f89-4bb3-9b76-86a9331a303e" containerName="util" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.359209 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2628fd13-0f89-4bb3-9b76-86a9331a303e" containerName="util" 
Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.359321 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6bdcf19-db76-497c-a2fe-a6de38fae724" containerName="console" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.359338 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="2628fd13-0f89-4bb3-9b76-86a9331a303e" containerName="extract" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.359863 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.362325 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.362403 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.362673 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.364510 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.365649 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-x9knq" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.376996 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b"] Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.525703 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7swj\" (UniqueName: \"kubernetes.io/projected/cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8-kube-api-access-s7swj\") 
pod \"metallb-operator-controller-manager-5d58b8658b-qjr5b\" (UID: \"cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8\") " pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.525763 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8-webhook-cert\") pod \"metallb-operator-controller-manager-5d58b8658b-qjr5b\" (UID: \"cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8\") " pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.525800 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8-apiservice-cert\") pod \"metallb-operator-controller-manager-5d58b8658b-qjr5b\" (UID: \"cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8\") " pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.608160 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh"] Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.609067 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.614679 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.615706 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.615891 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-872t9" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.627164 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8-apiservice-cert\") pod \"metallb-operator-controller-manager-5d58b8658b-qjr5b\" (UID: \"cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8\") " pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.627313 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7swj\" (UniqueName: \"kubernetes.io/projected/cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8-kube-api-access-s7swj\") pod \"metallb-operator-controller-manager-5d58b8658b-qjr5b\" (UID: \"cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8\") " pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.627357 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8-webhook-cert\") pod \"metallb-operator-controller-manager-5d58b8658b-qjr5b\" (UID: \"cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8\") " pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:25 crc 
kubenswrapper[4699]: I0226 11:26:25.634807 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8-apiservice-cert\") pod \"metallb-operator-controller-manager-5d58b8658b-qjr5b\" (UID: \"cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8\") " pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.638973 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8-webhook-cert\") pod \"metallb-operator-controller-manager-5d58b8658b-qjr5b\" (UID: \"cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8\") " pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.655490 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7swj\" (UniqueName: \"kubernetes.io/projected/cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8-kube-api-access-s7swj\") pod \"metallb-operator-controller-manager-5d58b8658b-qjr5b\" (UID: \"cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8\") " pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.663223 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh"] Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.676261 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.728241 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af2438c1-8812-4bb1-8999-66cb8d804c05-webhook-cert\") pod \"metallb-operator-webhook-server-6d98597f89-glkjh\" (UID: \"af2438c1-8812-4bb1-8999-66cb8d804c05\") " pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.728323 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cp9w\" (UniqueName: \"kubernetes.io/projected/af2438c1-8812-4bb1-8999-66cb8d804c05-kube-api-access-6cp9w\") pod \"metallb-operator-webhook-server-6d98597f89-glkjh\" (UID: \"af2438c1-8812-4bb1-8999-66cb8d804c05\") " pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.728515 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af2438c1-8812-4bb1-8999-66cb8d804c05-apiservice-cert\") pod \"metallb-operator-webhook-server-6d98597f89-glkjh\" (UID: \"af2438c1-8812-4bb1-8999-66cb8d804c05\") " pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.829843 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cp9w\" (UniqueName: \"kubernetes.io/projected/af2438c1-8812-4bb1-8999-66cb8d804c05-kube-api-access-6cp9w\") pod \"metallb-operator-webhook-server-6d98597f89-glkjh\" (UID: \"af2438c1-8812-4bb1-8999-66cb8d804c05\") " pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.829901 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af2438c1-8812-4bb1-8999-66cb8d804c05-apiservice-cert\") pod \"metallb-operator-webhook-server-6d98597f89-glkjh\" (UID: \"af2438c1-8812-4bb1-8999-66cb8d804c05\") " pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.829949 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af2438c1-8812-4bb1-8999-66cb8d804c05-webhook-cert\") pod \"metallb-operator-webhook-server-6d98597f89-glkjh\" (UID: \"af2438c1-8812-4bb1-8999-66cb8d804c05\") " pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.835802 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af2438c1-8812-4bb1-8999-66cb8d804c05-apiservice-cert\") pod \"metallb-operator-webhook-server-6d98597f89-glkjh\" (UID: \"af2438c1-8812-4bb1-8999-66cb8d804c05\") " pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.835875 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af2438c1-8812-4bb1-8999-66cb8d804c05-webhook-cert\") pod \"metallb-operator-webhook-server-6d98597f89-glkjh\" (UID: \"af2438c1-8812-4bb1-8999-66cb8d804c05\") " pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.861686 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cp9w\" (UniqueName: \"kubernetes.io/projected/af2438c1-8812-4bb1-8999-66cb8d804c05-kube-api-access-6cp9w\") pod \"metallb-operator-webhook-server-6d98597f89-glkjh\" (UID: 
\"af2438c1-8812-4bb1-8999-66cb8d804c05\") " pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.925576 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b"] Feb 26 11:26:25 crc kubenswrapper[4699]: I0226 11:26:25.925894 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:26 crc kubenswrapper[4699]: I0226 11:26:26.113094 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" event={"ID":"cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8","Type":"ContainerStarted","Data":"126c16a07f27b3ebfb4ef4d7167dfcb10e2537d56e3a6f41235a7c03088b9d52"} Feb 26 11:26:26 crc kubenswrapper[4699]: I0226 11:26:26.355212 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh"] Feb 26 11:26:26 crc kubenswrapper[4699]: W0226 11:26:26.361531 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf2438c1_8812_4bb1_8999_66cb8d804c05.slice/crio-63c962498ad09cd2d7911c43ba1263076a71c6c3d06a5cc940bbd4d151f6d4a2 WatchSource:0}: Error finding container 63c962498ad09cd2d7911c43ba1263076a71c6c3d06a5cc940bbd4d151f6d4a2: Status 404 returned error can't find the container with id 63c962498ad09cd2d7911c43ba1263076a71c6c3d06a5cc940bbd4d151f6d4a2 Feb 26 11:26:27 crc kubenswrapper[4699]: I0226 11:26:27.118802 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" event={"ID":"af2438c1-8812-4bb1-8999-66cb8d804c05","Type":"ContainerStarted","Data":"63c962498ad09cd2d7911c43ba1263076a71c6c3d06a5cc940bbd4d151f6d4a2"} Feb 26 11:26:30 crc kubenswrapper[4699]: I0226 11:26:30.140721 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" event={"ID":"cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8","Type":"ContainerStarted","Data":"5d0d11f0bc581f7731d75f17295799a975a413b459eeb4c8572e36a67d411967"} Feb 26 11:26:30 crc kubenswrapper[4699]: I0226 11:26:30.141349 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:26:30 crc kubenswrapper[4699]: I0226 11:26:30.166041 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" podStartSLOduration=1.662847131 podStartE2EDuration="5.166021485s" podCreationTimestamp="2026-02-26 11:26:25 +0000 UTC" firstStartedPulling="2026-02-26 11:26:25.945359793 +0000 UTC m=+931.756186227" lastFinishedPulling="2026-02-26 11:26:29.448534147 +0000 UTC m=+935.259360581" observedRunningTime="2026-02-26 11:26:30.162989148 +0000 UTC m=+935.973815602" watchObservedRunningTime="2026-02-26 11:26:30.166021485 +0000 UTC m=+935.976847929" Feb 26 11:26:32 crc kubenswrapper[4699]: I0226 11:26:32.156080 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" event={"ID":"af2438c1-8812-4bb1-8999-66cb8d804c05","Type":"ContainerStarted","Data":"3953f87cfe3d6d7ffb9e60f6c4d444487aaef75cc6561dc999ac060b63dfc8b7"} Feb 26 11:26:32 crc kubenswrapper[4699]: I0226 11:26:32.156674 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:26:32 crc kubenswrapper[4699]: I0226 11:26:32.180421 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" podStartSLOduration=2.131936047 podStartE2EDuration="7.180399081s" podCreationTimestamp="2026-02-26 11:26:25 +0000 
UTC" firstStartedPulling="2026-02-26 11:26:26.364879028 +0000 UTC m=+932.175705462" lastFinishedPulling="2026-02-26 11:26:31.413342062 +0000 UTC m=+937.224168496" observedRunningTime="2026-02-26 11:26:32.178534978 +0000 UTC m=+937.989361412" watchObservedRunningTime="2026-02-26 11:26:32.180399081 +0000 UTC m=+937.991225525" Feb 26 11:26:45 crc kubenswrapper[4699]: I0226 11:26:45.933475 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6d98597f89-glkjh" Feb 26 11:27:05 crc kubenswrapper[4699]: I0226 11:27:05.678699 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5d58b8658b-qjr5b" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.532952 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-wszs7"] Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.535041 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.541407 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.541472 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.541641 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-n4r2c" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.573434 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb"] Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.574637 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.576916 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.582050 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb"] Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.651439 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-l8phj"] Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.652554 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-l8phj" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.655610 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.655837 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.656024 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8vc6f" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.656184 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.675059 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-bs5nk"] Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.676175 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.677445 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-frr-startup\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.677501 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-frr-conf\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.677529 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-reloader\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.677559 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-metrics-certs\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.677577 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwb7f\" (UniqueName: \"kubernetes.io/projected/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-kube-api-access-jwb7f\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc 
kubenswrapper[4699]: I0226 11:27:06.677604 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-metrics\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.677661 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-frr-sockets\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.678806 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.690713 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-bs5nk"] Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.778946 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlnpn\" (UniqueName: \"kubernetes.io/projected/6ef6a9d7-6997-485a-a812-ded9d3a2df85-kube-api-access-jlnpn\") pod \"controller-86ddb6bd46-bs5nk\" (UID: \"6ef6a9d7-6997-485a-a812-ded9d3a2df85\") " pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.779013 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35357e2c-2a03-46f8-bc28-f7daad3b679d-cert\") pod \"frr-k8s-webhook-server-7f989f654f-svsrb\" (UID: \"35357e2c-2a03-46f8-bc28-f7daad3b679d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.779348 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef6a9d7-6997-485a-a812-ded9d3a2df85-metrics-certs\") pod \"controller-86ddb6bd46-bs5nk\" (UID: \"6ef6a9d7-6997-485a-a812-ded9d3a2df85\") " pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.779392 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-frr-sockets\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.779416 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-frr-startup\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.779700 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-frr-conf\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.779824 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-frr-sockets\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780074 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-frr-conf\") pod \"frr-k8s-wszs7\" (UID: 
\"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780257 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-reloader\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780301 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-metrics-certs\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780351 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tgrz\" (UniqueName: \"kubernetes.io/projected/d656ca89-f955-44bb-9944-f75bf485a254-kube-api-access-8tgrz\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:06 crc kubenswrapper[4699]: E0226 11:27:06.780476 4699 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780557 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-reloader\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: E0226 11:27:06.780580 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-metrics-certs podName:dfa29d17-a66a-42fe-8275-1526f8fb6dc9 nodeName:}" failed. 
No retries permitted until 2026-02-26 11:27:07.280520306 +0000 UTC m=+973.091346740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-metrics-certs") pod "frr-k8s-wszs7" (UID: "dfa29d17-a66a-42fe-8275-1526f8fb6dc9") : secret "frr-k8s-certs-secret" not found Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780602 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwb7f\" (UniqueName: \"kubernetes.io/projected/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-kube-api-access-jwb7f\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780633 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-metrics-certs\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780663 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-metrics\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780689 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrr6g\" (UniqueName: \"kubernetes.io/projected/35357e2c-2a03-46f8-bc28-f7daad3b679d-kube-api-access-qrr6g\") pod \"frr-k8s-webhook-server-7f989f654f-svsrb\" (UID: \"35357e2c-2a03-46f8-bc28-f7daad3b679d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780710 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ef6a9d7-6997-485a-a812-ded9d3a2df85-cert\") pod \"controller-86ddb6bd46-bs5nk\" (UID: \"6ef6a9d7-6997-485a-a812-ded9d3a2df85\") " pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780742 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d656ca89-f955-44bb-9944-f75bf485a254-metallb-excludel2\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780787 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-memberlist\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.780947 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-metrics\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.781009 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-frr-startup\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.799066 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwb7f\" (UniqueName: 
\"kubernetes.io/projected/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-kube-api-access-jwb7f\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.881902 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35357e2c-2a03-46f8-bc28-f7daad3b679d-cert\") pod \"frr-k8s-webhook-server-7f989f654f-svsrb\" (UID: \"35357e2c-2a03-46f8-bc28-f7daad3b679d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.882254 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef6a9d7-6997-485a-a812-ded9d3a2df85-metrics-certs\") pod \"controller-86ddb6bd46-bs5nk\" (UID: \"6ef6a9d7-6997-485a-a812-ded9d3a2df85\") " pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.882478 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tgrz\" (UniqueName: \"kubernetes.io/projected/d656ca89-f955-44bb-9944-f75bf485a254-kube-api-access-8tgrz\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.882595 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-metrics-certs\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.882719 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrr6g\" (UniqueName: \"kubernetes.io/projected/35357e2c-2a03-46f8-bc28-f7daad3b679d-kube-api-access-qrr6g\") pod 
\"frr-k8s-webhook-server-7f989f654f-svsrb\" (UID: \"35357e2c-2a03-46f8-bc28-f7daad3b679d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.882843 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ef6a9d7-6997-485a-a812-ded9d3a2df85-cert\") pod \"controller-86ddb6bd46-bs5nk\" (UID: \"6ef6a9d7-6997-485a-a812-ded9d3a2df85\") " pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.882968 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d656ca89-f955-44bb-9944-f75bf485a254-metallb-excludel2\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.883135 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-memberlist\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.883259 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlnpn\" (UniqueName: \"kubernetes.io/projected/6ef6a9d7-6997-485a-a812-ded9d3a2df85-kube-api-access-jlnpn\") pod \"controller-86ddb6bd46-bs5nk\" (UID: \"6ef6a9d7-6997-485a-a812-ded9d3a2df85\") " pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:06 crc kubenswrapper[4699]: E0226 11:27:06.882829 4699 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 26 11:27:06 crc kubenswrapper[4699]: E0226 11:27:06.883509 4699 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-metrics-certs podName:d656ca89-f955-44bb-9944-f75bf485a254 nodeName:}" failed. No retries permitted until 2026-02-26 11:27:07.383489487 +0000 UTC m=+973.194315931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-metrics-certs") pod "speaker-l8phj" (UID: "d656ca89-f955-44bb-9944-f75bf485a254") : secret "speaker-certs-secret" not found Feb 26 11:27:06 crc kubenswrapper[4699]: E0226 11:27:06.883259 4699 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 11:27:06 crc kubenswrapper[4699]: E0226 11:27:06.883692 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-memberlist podName:d656ca89-f955-44bb-9944-f75bf485a254 nodeName:}" failed. No retries permitted until 2026-02-26 11:27:07.383667773 +0000 UTC m=+973.194494237 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-memberlist") pod "speaker-l8phj" (UID: "d656ca89-f955-44bb-9944-f75bf485a254") : secret "metallb-memberlist" not found Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.883796 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d656ca89-f955-44bb-9944-f75bf485a254-metallb-excludel2\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.887839 4699 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.888219 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef6a9d7-6997-485a-a812-ded9d3a2df85-metrics-certs\") pod \"controller-86ddb6bd46-bs5nk\" (UID: \"6ef6a9d7-6997-485a-a812-ded9d3a2df85\") " pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.888235 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35357e2c-2a03-46f8-bc28-f7daad3b679d-cert\") pod \"frr-k8s-webhook-server-7f989f654f-svsrb\" (UID: \"35357e2c-2a03-46f8-bc28-f7daad3b679d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.898712 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ef6a9d7-6997-485a-a812-ded9d3a2df85-cert\") pod \"controller-86ddb6bd46-bs5nk\" (UID: \"6ef6a9d7-6997-485a-a812-ded9d3a2df85\") " pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.903813 4699 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qrr6g\" (UniqueName: \"kubernetes.io/projected/35357e2c-2a03-46f8-bc28-f7daad3b679d-kube-api-access-qrr6g\") pod \"frr-k8s-webhook-server-7f989f654f-svsrb\" (UID: \"35357e2c-2a03-46f8-bc28-f7daad3b679d\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.910391 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tgrz\" (UniqueName: \"kubernetes.io/projected/d656ca89-f955-44bb-9944-f75bf485a254-kube-api-access-8tgrz\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.912882 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlnpn\" (UniqueName: \"kubernetes.io/projected/6ef6a9d7-6997-485a-a812-ded9d3a2df85-kube-api-access-jlnpn\") pod \"controller-86ddb6bd46-bs5nk\" (UID: \"6ef6a9d7-6997-485a-a812-ded9d3a2df85\") " pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:06 crc kubenswrapper[4699]: I0226 11:27:06.996079 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:07 crc kubenswrapper[4699]: I0226 11:27:07.190595 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" Feb 26 11:27:07 crc kubenswrapper[4699]: I0226 11:27:07.291902 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-metrics-certs\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:07 crc kubenswrapper[4699]: I0226 11:27:07.295981 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfa29d17-a66a-42fe-8275-1526f8fb6dc9-metrics-certs\") pod \"frr-k8s-wszs7\" (UID: \"dfa29d17-a66a-42fe-8275-1526f8fb6dc9\") " pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:07 crc kubenswrapper[4699]: I0226 11:27:07.377056 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb"] Feb 26 11:27:07 crc kubenswrapper[4699]: W0226 11:27:07.382547 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35357e2c_2a03_46f8_bc28_f7daad3b679d.slice/crio-36b83af52be953d7bad40bb5c6ad5f1848a66b78add84e2007e4c0e8dcbff369 WatchSource:0}: Error finding container 36b83af52be953d7bad40bb5c6ad5f1848a66b78add84e2007e4c0e8dcbff369: Status 404 returned error can't find the container with id 36b83af52be953d7bad40bb5c6ad5f1848a66b78add84e2007e4c0e8dcbff369 Feb 26 11:27:07 crc kubenswrapper[4699]: I0226 11:27:07.393254 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-metrics-certs\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:07 crc kubenswrapper[4699]: I0226 11:27:07.393342 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-memberlist\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:07 crc kubenswrapper[4699]: E0226 11:27:07.393488 4699 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 11:27:07 crc kubenswrapper[4699]: E0226 11:27:07.393560 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-memberlist podName:d656ca89-f955-44bb-9944-f75bf485a254 nodeName:}" failed. No retries permitted until 2026-02-26 11:27:08.393543049 +0000 UTC m=+974.204369483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-memberlist") pod "speaker-l8phj" (UID: "d656ca89-f955-44bb-9944-f75bf485a254") : secret "metallb-memberlist" not found Feb 26 11:27:07 crc kubenswrapper[4699]: I0226 11:27:07.397948 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-metrics-certs\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:07 crc kubenswrapper[4699]: I0226 11:27:07.451620 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:07 crc kubenswrapper[4699]: I0226 11:27:07.475652 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-bs5nk"] Feb 26 11:27:07 crc kubenswrapper[4699]: I0226 11:27:07.529980 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" event={"ID":"35357e2c-2a03-46f8-bc28-f7daad3b679d","Type":"ContainerStarted","Data":"36b83af52be953d7bad40bb5c6ad5f1848a66b78add84e2007e4c0e8dcbff369"} Feb 26 11:27:07 crc kubenswrapper[4699]: I0226 11:27:07.531348 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-bs5nk" event={"ID":"6ef6a9d7-6997-485a-a812-ded9d3a2df85","Type":"ContainerStarted","Data":"5e8fb4298f3a3a8bd86209e0afd9b52912537cb1c66df9db24fa6b59e4dc1adf"} Feb 26 11:27:08 crc kubenswrapper[4699]: I0226 11:27:08.405306 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-memberlist\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:08 crc kubenswrapper[4699]: I0226 11:27:08.410006 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d656ca89-f955-44bb-9944-f75bf485a254-memberlist\") pod \"speaker-l8phj\" (UID: \"d656ca89-f955-44bb-9944-f75bf485a254\") " pod="metallb-system/speaker-l8phj" Feb 26 11:27:08 crc kubenswrapper[4699]: I0226 11:27:08.469499 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-l8phj" Feb 26 11:27:08 crc kubenswrapper[4699]: W0226 11:27:08.489924 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd656ca89_f955_44bb_9944_f75bf485a254.slice/crio-cccd11e4a1914d85bafde0a0bc8f0324748197e5ab8421f4b32496e0a983bcc1 WatchSource:0}: Error finding container cccd11e4a1914d85bafde0a0bc8f0324748197e5ab8421f4b32496e0a983bcc1: Status 404 returned error can't find the container with id cccd11e4a1914d85bafde0a0bc8f0324748197e5ab8421f4b32496e0a983bcc1 Feb 26 11:27:08 crc kubenswrapper[4699]: I0226 11:27:08.538460 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wszs7" event={"ID":"dfa29d17-a66a-42fe-8275-1526f8fb6dc9","Type":"ContainerStarted","Data":"5ef387f6ad8a8d3b2228d71c65732676cadf36a3a39c10b8cce49edd482081d2"} Feb 26 11:27:08 crc kubenswrapper[4699]: I0226 11:27:08.539290 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l8phj" event={"ID":"d656ca89-f955-44bb-9944-f75bf485a254","Type":"ContainerStarted","Data":"cccd11e4a1914d85bafde0a0bc8f0324748197e5ab8421f4b32496e0a983bcc1"} Feb 26 11:27:08 crc kubenswrapper[4699]: I0226 11:27:08.541185 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-bs5nk" event={"ID":"6ef6a9d7-6997-485a-a812-ded9d3a2df85","Type":"ContainerStarted","Data":"abfb559eaf2df8b227fabdc97cd10e7b3bf75b132967329540d122f07db30d08"} Feb 26 11:27:08 crc kubenswrapper[4699]: I0226 11:27:08.541225 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-bs5nk" event={"ID":"6ef6a9d7-6997-485a-a812-ded9d3a2df85","Type":"ContainerStarted","Data":"8ed671135b943a49812112a63ded16910b3411204ed900f6d6c2e5b474cf1d3c"} Feb 26 11:27:08 crc kubenswrapper[4699]: I0226 11:27:08.541346 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:08 crc kubenswrapper[4699]: I0226 11:27:08.566566 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-bs5nk" podStartSLOduration=2.566549206 podStartE2EDuration="2.566549206s" podCreationTimestamp="2026-02-26 11:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:27:08.563997893 +0000 UTC m=+974.374824347" watchObservedRunningTime="2026-02-26 11:27:08.566549206 +0000 UTC m=+974.377375640" Feb 26 11:27:09 crc kubenswrapper[4699]: I0226 11:27:09.595574 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l8phj" event={"ID":"d656ca89-f955-44bb-9944-f75bf485a254","Type":"ContainerStarted","Data":"681998004cee42d0a153586aabffbdfee4c3452ac997f259a94185dfa7c96b01"} Feb 26 11:27:09 crc kubenswrapper[4699]: I0226 11:27:09.595876 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l8phj" event={"ID":"d656ca89-f955-44bb-9944-f75bf485a254","Type":"ContainerStarted","Data":"0df399b4331ab1b5d9d958dbd2a1435311625e3cb5abd934bab72d2e6c93415d"} Feb 26 11:27:09 crc kubenswrapper[4699]: I0226 11:27:09.595900 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-l8phj" Feb 26 11:27:09 crc kubenswrapper[4699]: I0226 11:27:09.632172 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-l8phj" podStartSLOduration=3.632155353 podStartE2EDuration="3.632155353s" podCreationTimestamp="2026-02-26 11:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:27:09.630408542 +0000 UTC m=+975.441234976" watchObservedRunningTime="2026-02-26 11:27:09.632155353 +0000 UTC m=+975.442981787" Feb 26 11:27:11 crc kubenswrapper[4699]: 
I0226 11:27:11.585210 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:27:11 crc kubenswrapper[4699]: I0226 11:27:11.585583 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:27:15 crc kubenswrapper[4699]: I0226 11:27:15.635235 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" event={"ID":"35357e2c-2a03-46f8-bc28-f7daad3b679d","Type":"ContainerStarted","Data":"fdbf34372fdba9313eb7c15b9b8f16cb8b02ae8a3989978a732171eb3899389f"} Feb 26 11:27:15 crc kubenswrapper[4699]: I0226 11:27:15.635535 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" Feb 26 11:27:15 crc kubenswrapper[4699]: I0226 11:27:15.636977 4699 generic.go:334] "Generic (PLEG): container finished" podID="dfa29d17-a66a-42fe-8275-1526f8fb6dc9" containerID="261b01236f9fb41efd0623279170275ff90996043ed0669d29633d7b8600b866" exitCode=0 Feb 26 11:27:15 crc kubenswrapper[4699]: I0226 11:27:15.637092 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wszs7" event={"ID":"dfa29d17-a66a-42fe-8275-1526f8fb6dc9","Type":"ContainerDied","Data":"261b01236f9fb41efd0623279170275ff90996043ed0669d29633d7b8600b866"} Feb 26 11:27:15 crc kubenswrapper[4699]: I0226 11:27:15.656769 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" 
podStartSLOduration=2.3067763 podStartE2EDuration="9.656751009s" podCreationTimestamp="2026-02-26 11:27:06 +0000 UTC" firstStartedPulling="2026-02-26 11:27:07.385045306 +0000 UTC m=+973.195871740" lastFinishedPulling="2026-02-26 11:27:14.735020015 +0000 UTC m=+980.545846449" observedRunningTime="2026-02-26 11:27:15.652373423 +0000 UTC m=+981.463199947" watchObservedRunningTime="2026-02-26 11:27:15.656751009 +0000 UTC m=+981.467577443" Feb 26 11:27:16 crc kubenswrapper[4699]: I0226 11:27:16.647770 4699 generic.go:334] "Generic (PLEG): container finished" podID="dfa29d17-a66a-42fe-8275-1526f8fb6dc9" containerID="dfb9a7698c5d387075f4be606b07d8b47db95cefab1e9938cbf04f21cb0a6158" exitCode=0 Feb 26 11:27:16 crc kubenswrapper[4699]: I0226 11:27:16.647880 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wszs7" event={"ID":"dfa29d17-a66a-42fe-8275-1526f8fb6dc9","Type":"ContainerDied","Data":"dfb9a7698c5d387075f4be606b07d8b47db95cefab1e9938cbf04f21cb0a6158"} Feb 26 11:27:17 crc kubenswrapper[4699]: I0226 11:27:17.656517 4699 generic.go:334] "Generic (PLEG): container finished" podID="dfa29d17-a66a-42fe-8275-1526f8fb6dc9" containerID="daaffb47283e3534600ea7af26f7849913fe543978df0d1cb643ecaa3c98b251" exitCode=0 Feb 26 11:27:17 crc kubenswrapper[4699]: I0226 11:27:17.656584 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wszs7" event={"ID":"dfa29d17-a66a-42fe-8275-1526f8fb6dc9","Type":"ContainerDied","Data":"daaffb47283e3534600ea7af26f7849913fe543978df0d1cb643ecaa3c98b251"} Feb 26 11:27:18 crc kubenswrapper[4699]: I0226 11:27:18.476737 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-l8phj" Feb 26 11:27:18 crc kubenswrapper[4699]: I0226 11:27:18.677185 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wszs7" 
event={"ID":"dfa29d17-a66a-42fe-8275-1526f8fb6dc9","Type":"ContainerStarted","Data":"2a8083fdf034370ef52d6d7d21d33efd71b3363f5d5e4f79c3c9d5ac7c381aa7"} Feb 26 11:27:18 crc kubenswrapper[4699]: I0226 11:27:18.677236 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wszs7" event={"ID":"dfa29d17-a66a-42fe-8275-1526f8fb6dc9","Type":"ContainerStarted","Data":"992cde06cda987e365a666f1898a916a2d85784dd4cd68e146069de3babd2e61"} Feb 26 11:27:18 crc kubenswrapper[4699]: I0226 11:27:18.677255 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wszs7" event={"ID":"dfa29d17-a66a-42fe-8275-1526f8fb6dc9","Type":"ContainerStarted","Data":"97b0708f178e32a7989588c33b2d46e2468ae3173dc6dc68075301987b71bcba"} Feb 26 11:27:18 crc kubenswrapper[4699]: I0226 11:27:18.677265 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wszs7" event={"ID":"dfa29d17-a66a-42fe-8275-1526f8fb6dc9","Type":"ContainerStarted","Data":"57dffa9df503a62eee0346e241488af2ccf89bcd0fc2e79fd96f293eed9a5ae0"} Feb 26 11:27:19 crc kubenswrapper[4699]: I0226 11:27:19.694193 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wszs7" event={"ID":"dfa29d17-a66a-42fe-8275-1526f8fb6dc9","Type":"ContainerStarted","Data":"0e1f797ed8377e27dc40cb8a80160c0bfd25198075e343c11b66c8182a0955a3"} Feb 26 11:27:19 crc kubenswrapper[4699]: I0226 11:27:19.694656 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wszs7" event={"ID":"dfa29d17-a66a-42fe-8275-1526f8fb6dc9","Type":"ContainerStarted","Data":"33acc66c12b78c941624bbf6dc15f24f63826732cff6426dd644b061a288760b"} Feb 26 11:27:19 crc kubenswrapper[4699]: I0226 11:27:19.694799 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:19 crc kubenswrapper[4699]: I0226 11:27:19.727230 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-wszs7" podStartSLOduration=6.575731016 podStartE2EDuration="13.727197675s" podCreationTimestamp="2026-02-26 11:27:06 +0000 UTC" firstStartedPulling="2026-02-26 11:27:07.591293438 +0000 UTC m=+973.402119872" lastFinishedPulling="2026-02-26 11:27:14.742760097 +0000 UTC m=+980.553586531" observedRunningTime="2026-02-26 11:27:19.719942677 +0000 UTC m=+985.530769131" watchObservedRunningTime="2026-02-26 11:27:19.727197675 +0000 UTC m=+985.538024129" Feb 26 11:27:21 crc kubenswrapper[4699]: I0226 11:27:21.420283 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5ckfn"] Feb 26 11:27:21 crc kubenswrapper[4699]: I0226 11:27:21.420986 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5ckfn" Feb 26 11:27:21 crc kubenswrapper[4699]: I0226 11:27:21.422972 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 26 11:27:21 crc kubenswrapper[4699]: I0226 11:27:21.423027 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gr4l9" Feb 26 11:27:21 crc kubenswrapper[4699]: I0226 11:27:21.423177 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 26 11:27:21 crc kubenswrapper[4699]: I0226 11:27:21.447310 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5ckfn"] Feb 26 11:27:21 crc kubenswrapper[4699]: I0226 11:27:21.498857 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzrhj\" (UniqueName: \"kubernetes.io/projected/46dbbdb1-7181-4c0f-a593-3536bad6290c-kube-api-access-qzrhj\") pod \"openstack-operator-index-5ckfn\" (UID: \"46dbbdb1-7181-4c0f-a593-3536bad6290c\") " 
pod="openstack-operators/openstack-operator-index-5ckfn" Feb 26 11:27:21 crc kubenswrapper[4699]: I0226 11:27:21.600202 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzrhj\" (UniqueName: \"kubernetes.io/projected/46dbbdb1-7181-4c0f-a593-3536bad6290c-kube-api-access-qzrhj\") pod \"openstack-operator-index-5ckfn\" (UID: \"46dbbdb1-7181-4c0f-a593-3536bad6290c\") " pod="openstack-operators/openstack-operator-index-5ckfn" Feb 26 11:27:21 crc kubenswrapper[4699]: I0226 11:27:21.626328 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzrhj\" (UniqueName: \"kubernetes.io/projected/46dbbdb1-7181-4c0f-a593-3536bad6290c-kube-api-access-qzrhj\") pod \"openstack-operator-index-5ckfn\" (UID: \"46dbbdb1-7181-4c0f-a593-3536bad6290c\") " pod="openstack-operators/openstack-operator-index-5ckfn" Feb 26 11:27:21 crc kubenswrapper[4699]: I0226 11:27:21.741476 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5ckfn" Feb 26 11:27:22 crc kubenswrapper[4699]: I0226 11:27:22.148003 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5ckfn"] Feb 26 11:27:22 crc kubenswrapper[4699]: I0226 11:27:22.452578 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:22 crc kubenswrapper[4699]: I0226 11:27:22.493003 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:22 crc kubenswrapper[4699]: I0226 11:27:22.714210 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5ckfn" event={"ID":"46dbbdb1-7181-4c0f-a593-3536bad6290c","Type":"ContainerStarted","Data":"a0c95212ad762b90262b586cd0a018ca600aeeb8aaf5bb8f18c20bd3e5190bb7"} Feb 26 11:27:24 crc kubenswrapper[4699]: I0226 11:27:24.793270 4699 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5ckfn"] Feb 26 11:27:25 crc kubenswrapper[4699]: I0226 11:27:25.401086 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gmh8j"] Feb 26 11:27:25 crc kubenswrapper[4699]: I0226 11:27:25.401946 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gmh8j" Feb 26 11:27:25 crc kubenswrapper[4699]: I0226 11:27:25.408328 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gmh8j"] Feb 26 11:27:25 crc kubenswrapper[4699]: I0226 11:27:25.551428 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vj7b\" (UniqueName: \"kubernetes.io/projected/22cfe789-87ae-4b23-91c2-cbb5112e4285-kube-api-access-5vj7b\") pod \"openstack-operator-index-gmh8j\" (UID: \"22cfe789-87ae-4b23-91c2-cbb5112e4285\") " pod="openstack-operators/openstack-operator-index-gmh8j" Feb 26 11:27:25 crc kubenswrapper[4699]: I0226 11:27:25.653766 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vj7b\" (UniqueName: \"kubernetes.io/projected/22cfe789-87ae-4b23-91c2-cbb5112e4285-kube-api-access-5vj7b\") pod \"openstack-operator-index-gmh8j\" (UID: \"22cfe789-87ae-4b23-91c2-cbb5112e4285\") " pod="openstack-operators/openstack-operator-index-gmh8j" Feb 26 11:27:25 crc kubenswrapper[4699]: I0226 11:27:25.692365 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vj7b\" (UniqueName: \"kubernetes.io/projected/22cfe789-87ae-4b23-91c2-cbb5112e4285-kube-api-access-5vj7b\") pod \"openstack-operator-index-gmh8j\" (UID: \"22cfe789-87ae-4b23-91c2-cbb5112e4285\") " pod="openstack-operators/openstack-operator-index-gmh8j" Feb 26 11:27:25 crc kubenswrapper[4699]: I0226 11:27:25.724717 4699 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gmh8j" Feb 26 11:27:26 crc kubenswrapper[4699]: I0226 11:27:26.425281 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gmh8j"] Feb 26 11:27:26 crc kubenswrapper[4699]: W0226 11:27:26.427712 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22cfe789_87ae_4b23_91c2_cbb5112e4285.slice/crio-14cad797602d39bf559a3de2c76af4186814e8696471ea7b8a2457177441373a WatchSource:0}: Error finding container 14cad797602d39bf559a3de2c76af4186814e8696471ea7b8a2457177441373a: Status 404 returned error can't find the container with id 14cad797602d39bf559a3de2c76af4186814e8696471ea7b8a2457177441373a Feb 26 11:27:26 crc kubenswrapper[4699]: I0226 11:27:26.738678 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gmh8j" event={"ID":"22cfe789-87ae-4b23-91c2-cbb5112e4285","Type":"ContainerStarted","Data":"65250f92fd240c8235daf4676095c1c679111ac998e059a2d7def87554e89884"} Feb 26 11:27:26 crc kubenswrapper[4699]: I0226 11:27:26.738901 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gmh8j" event={"ID":"22cfe789-87ae-4b23-91c2-cbb5112e4285","Type":"ContainerStarted","Data":"14cad797602d39bf559a3de2c76af4186814e8696471ea7b8a2457177441373a"} Feb 26 11:27:26 crc kubenswrapper[4699]: I0226 11:27:26.740483 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5ckfn" event={"ID":"46dbbdb1-7181-4c0f-a593-3536bad6290c","Type":"ContainerStarted","Data":"df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5"} Feb 26 11:27:26 crc kubenswrapper[4699]: I0226 11:27:26.740565 4699 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack-operators/openstack-operator-index-5ckfn" podUID="46dbbdb1-7181-4c0f-a593-3536bad6290c" containerName="registry-server" containerID="cri-o://df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5" gracePeriod=2 Feb 26 11:27:26 crc kubenswrapper[4699]: I0226 11:27:26.756613 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gmh8j" podStartSLOduration=1.70828504 podStartE2EDuration="1.756593525s" podCreationTimestamp="2026-02-26 11:27:25 +0000 UTC" firstStartedPulling="2026-02-26 11:27:26.431151205 +0000 UTC m=+992.241977639" lastFinishedPulling="2026-02-26 11:27:26.47945969 +0000 UTC m=+992.290286124" observedRunningTime="2026-02-26 11:27:26.751939581 +0000 UTC m=+992.562766015" watchObservedRunningTime="2026-02-26 11:27:26.756593525 +0000 UTC m=+992.567419959" Feb 26 11:27:26 crc kubenswrapper[4699]: I0226 11:27:26.771309 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5ckfn" podStartSLOduration=1.876372302 podStartE2EDuration="5.771287596s" podCreationTimestamp="2026-02-26 11:27:21 +0000 UTC" firstStartedPulling="2026-02-26 11:27:22.153795308 +0000 UTC m=+987.964621742" lastFinishedPulling="2026-02-26 11:27:26.048710602 +0000 UTC m=+991.859537036" observedRunningTime="2026-02-26 11:27:26.769732061 +0000 UTC m=+992.580558495" watchObservedRunningTime="2026-02-26 11:27:26.771287596 +0000 UTC m=+992.582114040" Feb 26 11:27:26 crc kubenswrapper[4699]: I0226 11:27:26.999813 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-bs5nk" Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.073164 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5ckfn" Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.173187 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzrhj\" (UniqueName: \"kubernetes.io/projected/46dbbdb1-7181-4c0f-a593-3536bad6290c-kube-api-access-qzrhj\") pod \"46dbbdb1-7181-4c0f-a593-3536bad6290c\" (UID: \"46dbbdb1-7181-4c0f-a593-3536bad6290c\") " Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.181039 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46dbbdb1-7181-4c0f-a593-3536bad6290c-kube-api-access-qzrhj" (OuterVolumeSpecName: "kube-api-access-qzrhj") pod "46dbbdb1-7181-4c0f-a593-3536bad6290c" (UID: "46dbbdb1-7181-4c0f-a593-3536bad6290c"). InnerVolumeSpecName "kube-api-access-qzrhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.195543 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-svsrb" Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.274845 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzrhj\" (UniqueName: \"kubernetes.io/projected/46dbbdb1-7181-4c0f-a593-3536bad6290c-kube-api-access-qzrhj\") on node \"crc\" DevicePath \"\"" Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.454551 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-wszs7" Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.746884 4699 generic.go:334] "Generic (PLEG): container finished" podID="46dbbdb1-7181-4c0f-a593-3536bad6290c" containerID="df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5" exitCode=0 Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.746924 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5ckfn" 
event={"ID":"46dbbdb1-7181-4c0f-a593-3536bad6290c","Type":"ContainerDied","Data":"df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5"} Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.746956 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5ckfn" Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.746979 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5ckfn" event={"ID":"46dbbdb1-7181-4c0f-a593-3536bad6290c","Type":"ContainerDied","Data":"a0c95212ad762b90262b586cd0a018ca600aeeb8aaf5bb8f18c20bd3e5190bb7"} Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.747005 4699 scope.go:117] "RemoveContainer" containerID="df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5" Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.769182 4699 scope.go:117] "RemoveContainer" containerID="df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5" Feb 26 11:27:27 crc kubenswrapper[4699]: E0226 11:27:27.770164 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5\": container with ID starting with df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5 not found: ID does not exist" containerID="df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5" Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.770199 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5"} err="failed to get container status \"df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5\": rpc error: code = NotFound desc = could not find container \"df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5\": container with ID starting with 
df14266671fd692c1bfed6867681c81dcfd8f90bac3b5b1e23224ac3948f1dd5 not found: ID does not exist" Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.777072 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5ckfn"] Feb 26 11:27:27 crc kubenswrapper[4699]: I0226 11:27:27.781772 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-5ckfn"] Feb 26 11:27:28 crc kubenswrapper[4699]: I0226 11:27:28.268392 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46dbbdb1-7181-4c0f-a593-3536bad6290c" path="/var/lib/kubelet/pods/46dbbdb1-7181-4c0f-a593-3536bad6290c/volumes" Feb 26 11:27:35 crc kubenswrapper[4699]: I0226 11:27:35.725878 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gmh8j" Feb 26 11:27:35 crc kubenswrapper[4699]: I0226 11:27:35.726450 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gmh8j" Feb 26 11:27:35 crc kubenswrapper[4699]: I0226 11:27:35.750534 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-gmh8j" Feb 26 11:27:35 crc kubenswrapper[4699]: I0226 11:27:35.818384 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gmh8j" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.444296 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8"] Feb 26 11:27:37 crc kubenswrapper[4699]: E0226 11:27:37.444829 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dbbdb1-7181-4c0f-a593-3536bad6290c" containerName="registry-server" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.444841 4699 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="46dbbdb1-7181-4c0f-a593-3536bad6290c" containerName="registry-server" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.444968 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="46dbbdb1-7181-4c0f-a593-3536bad6290c" containerName="registry-server" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.445951 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.448461 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4fpg2" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.460928 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8"] Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.616490 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfp82\" (UniqueName: \"kubernetes.io/projected/449351cd-8256-4e21-b27e-be3c4db11ca5-kube-api-access-qfp82\") pod \"dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.616536 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-util\") pod \"dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.616563 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-bundle\") pod \"dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.717373 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfp82\" (UniqueName: \"kubernetes.io/projected/449351cd-8256-4e21-b27e-be3c4db11ca5-kube-api-access-qfp82\") pod \"dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.717760 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-util\") pod \"dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.717912 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-bundle\") pod \"dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.718359 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-util\") pod 
\"dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.718389 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-bundle\") pod \"dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.741154 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfp82\" (UniqueName: \"kubernetes.io/projected/449351cd-8256-4e21-b27e-be3c4db11ca5-kube-api-access-qfp82\") pod \"dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:37 crc kubenswrapper[4699]: I0226 11:27:37.766921 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:38 crc kubenswrapper[4699]: I0226 11:27:38.230718 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8"] Feb 26 11:27:38 crc kubenswrapper[4699]: W0226 11:27:38.234357 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod449351cd_8256_4e21_b27e_be3c4db11ca5.slice/crio-59fbb8ef004b1a4cf86a817f17537a83fc7a638063f9abe4dd2c819f23229f7d WatchSource:0}: Error finding container 59fbb8ef004b1a4cf86a817f17537a83fc7a638063f9abe4dd2c819f23229f7d: Status 404 returned error can't find the container with id 59fbb8ef004b1a4cf86a817f17537a83fc7a638063f9abe4dd2c819f23229f7d Feb 26 11:27:38 crc kubenswrapper[4699]: I0226 11:27:38.819652 4699 generic.go:334] "Generic (PLEG): container finished" podID="449351cd-8256-4e21-b27e-be3c4db11ca5" containerID="f2f7f6c58efd9c486cb559854986255a713481e572462d9eed87f2bc7e30f241" exitCode=0 Feb 26 11:27:38 crc kubenswrapper[4699]: I0226 11:27:38.819914 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" event={"ID":"449351cd-8256-4e21-b27e-be3c4db11ca5","Type":"ContainerDied","Data":"f2f7f6c58efd9c486cb559854986255a713481e572462d9eed87f2bc7e30f241"} Feb 26 11:27:38 crc kubenswrapper[4699]: I0226 11:27:38.820396 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" event={"ID":"449351cd-8256-4e21-b27e-be3c4db11ca5","Type":"ContainerStarted","Data":"59fbb8ef004b1a4cf86a817f17537a83fc7a638063f9abe4dd2c819f23229f7d"} Feb 26 11:27:39 crc kubenswrapper[4699]: E0226 11:27:39.395265 4699 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial 
failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod449351cd_8256_4e21_b27e_be3c4db11ca5.slice/crio-314ddf81598d848087d655652620d779cfd60ffaee228700c37de9468ffba078.scope\": RecentStats: unable to find data in memory cache]" Feb 26 11:27:39 crc kubenswrapper[4699]: I0226 11:27:39.838712 4699 generic.go:334] "Generic (PLEG): container finished" podID="449351cd-8256-4e21-b27e-be3c4db11ca5" containerID="314ddf81598d848087d655652620d779cfd60ffaee228700c37de9468ffba078" exitCode=0 Feb 26 11:27:39 crc kubenswrapper[4699]: I0226 11:27:39.838765 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" event={"ID":"449351cd-8256-4e21-b27e-be3c4db11ca5","Type":"ContainerDied","Data":"314ddf81598d848087d655652620d779cfd60ffaee228700c37de9468ffba078"} Feb 26 11:27:40 crc kubenswrapper[4699]: I0226 11:27:40.851743 4699 generic.go:334] "Generic (PLEG): container finished" podID="449351cd-8256-4e21-b27e-be3c4db11ca5" containerID="5307481d5c588fb811e1d79efc2ddc4f8eef7eda8bb05246530f8cb377179764" exitCode=0 Feb 26 11:27:40 crc kubenswrapper[4699]: I0226 11:27:40.852183 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" event={"ID":"449351cd-8256-4e21-b27e-be3c4db11ca5","Type":"ContainerDied","Data":"5307481d5c588fb811e1d79efc2ddc4f8eef7eda8bb05246530f8cb377179764"} Feb 26 11:27:41 crc kubenswrapper[4699]: I0226 11:27:41.585091 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:27:41 crc kubenswrapper[4699]: I0226 11:27:41.585189 4699 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.114033 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.282171 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfp82\" (UniqueName: \"kubernetes.io/projected/449351cd-8256-4e21-b27e-be3c4db11ca5-kube-api-access-qfp82\") pod \"449351cd-8256-4e21-b27e-be3c4db11ca5\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.282281 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-util\") pod \"449351cd-8256-4e21-b27e-be3c4db11ca5\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.282333 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-bundle\") pod \"449351cd-8256-4e21-b27e-be3c4db11ca5\" (UID: \"449351cd-8256-4e21-b27e-be3c4db11ca5\") " Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.283040 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-bundle" (OuterVolumeSpecName: "bundle") pod "449351cd-8256-4e21-b27e-be3c4db11ca5" (UID: "449351cd-8256-4e21-b27e-be3c4db11ca5"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.290016 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/449351cd-8256-4e21-b27e-be3c4db11ca5-kube-api-access-qfp82" (OuterVolumeSpecName: "kube-api-access-qfp82") pod "449351cd-8256-4e21-b27e-be3c4db11ca5" (UID: "449351cd-8256-4e21-b27e-be3c4db11ca5"). InnerVolumeSpecName "kube-api-access-qfp82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.297313 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-util" (OuterVolumeSpecName: "util") pod "449351cd-8256-4e21-b27e-be3c4db11ca5" (UID: "449351cd-8256-4e21-b27e-be3c4db11ca5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.383711 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfp82\" (UniqueName: \"kubernetes.io/projected/449351cd-8256-4e21-b27e-be3c4db11ca5-kube-api-access-qfp82\") on node \"crc\" DevicePath \"\"" Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.383749 4699 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-util\") on node \"crc\" DevicePath \"\"" Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.383767 4699 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/449351cd-8256-4e21-b27e-be3c4db11ca5-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.881988 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" 
event={"ID":"449351cd-8256-4e21-b27e-be3c4db11ca5","Type":"ContainerDied","Data":"59fbb8ef004b1a4cf86a817f17537a83fc7a638063f9abe4dd2c819f23229f7d"} Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.882066 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59fbb8ef004b1a4cf86a817f17537a83fc7a638063f9abe4dd2c819f23229f7d" Feb 26 11:27:42 crc kubenswrapper[4699]: I0226 11:27:42.882101 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8" Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.622154 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd"] Feb 26 11:27:49 crc kubenswrapper[4699]: E0226 11:27:49.623021 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449351cd-8256-4e21-b27e-be3c4db11ca5" containerName="extract" Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.623037 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="449351cd-8256-4e21-b27e-be3c4db11ca5" containerName="extract" Feb 26 11:27:49 crc kubenswrapper[4699]: E0226 11:27:49.623062 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449351cd-8256-4e21-b27e-be3c4db11ca5" containerName="util" Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.623069 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="449351cd-8256-4e21-b27e-be3c4db11ca5" containerName="util" Feb 26 11:27:49 crc kubenswrapper[4699]: E0226 11:27:49.623085 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449351cd-8256-4e21-b27e-be3c4db11ca5" containerName="pull" Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.623093 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="449351cd-8256-4e21-b27e-be3c4db11ca5" containerName="pull" Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.623253 4699 
memory_manager.go:354] "RemoveStaleState removing state" podUID="449351cd-8256-4e21-b27e-be3c4db11ca5" containerName="extract" Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.623778 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd" Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.626429 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-b88ct" Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.647320 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd"] Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.785100 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbbhn\" (UniqueName: \"kubernetes.io/projected/3a6d1210-ece5-4666-80bf-c7c7821e441c-kube-api-access-hbbhn\") pod \"openstack-operator-controller-init-7c5cc54f9c-wjrrd\" (UID: \"3a6d1210-ece5-4666-80bf-c7c7821e441c\") " pod="openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd" Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.886277 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbbhn\" (UniqueName: \"kubernetes.io/projected/3a6d1210-ece5-4666-80bf-c7c7821e441c-kube-api-access-hbbhn\") pod \"openstack-operator-controller-init-7c5cc54f9c-wjrrd\" (UID: \"3a6d1210-ece5-4666-80bf-c7c7821e441c\") " pod="openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd" Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.904799 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbbhn\" (UniqueName: \"kubernetes.io/projected/3a6d1210-ece5-4666-80bf-c7c7821e441c-kube-api-access-hbbhn\") pod 
\"openstack-operator-controller-init-7c5cc54f9c-wjrrd\" (UID: \"3a6d1210-ece5-4666-80bf-c7c7821e441c\") " pod="openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd" Feb 26 11:27:49 crc kubenswrapper[4699]: I0226 11:27:49.945995 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd" Feb 26 11:27:50 crc kubenswrapper[4699]: I0226 11:27:50.375330 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd"] Feb 26 11:27:50 crc kubenswrapper[4699]: I0226 11:27:50.926184 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd" event={"ID":"3a6d1210-ece5-4666-80bf-c7c7821e441c","Type":"ContainerStarted","Data":"14f18c43e995834048418451242f186b58d9e588a4009246b7c26d94d2fc0672"} Feb 26 11:27:55 crc kubenswrapper[4699]: I0226 11:27:55.955998 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd" event={"ID":"3a6d1210-ece5-4666-80bf-c7c7821e441c","Type":"ContainerStarted","Data":"e9a618a01d013f465ebb83060bd50d4d1a228b78a25c5b2f5d6a002e0b768a20"} Feb 26 11:27:55 crc kubenswrapper[4699]: I0226 11:27:55.956593 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd" Feb 26 11:27:55 crc kubenswrapper[4699]: I0226 11:27:55.983692 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd" podStartSLOduration=2.325617336 podStartE2EDuration="6.983674208s" podCreationTimestamp="2026-02-26 11:27:49 +0000 UTC" firstStartedPulling="2026-02-26 11:27:50.382244324 +0000 UTC m=+1016.193070758" lastFinishedPulling="2026-02-26 11:27:55.040301186 +0000 UTC m=+1020.851127630" 
observedRunningTime="2026-02-26 11:27:55.980636141 +0000 UTC m=+1021.791462595" watchObservedRunningTime="2026-02-26 11:27:55.983674208 +0000 UTC m=+1021.794500652" Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 11:27:57.699415 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v2mgq"] Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 11:27:57.700917 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 11:27:57.718570 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2mgq"] Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 11:27:57.800331 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-catalog-content\") pod \"community-operators-v2mgq\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 11:27:57.800403 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq6hk\" (UniqueName: \"kubernetes.io/projected/a164bc1f-d5f1-4538-86c7-98edbe73d0af-kube-api-access-jq6hk\") pod \"community-operators-v2mgq\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 11:27:57.800452 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-utilities\") pod \"community-operators-v2mgq\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 
11:27:57.901819 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq6hk\" (UniqueName: \"kubernetes.io/projected/a164bc1f-d5f1-4538-86c7-98edbe73d0af-kube-api-access-jq6hk\") pod \"community-operators-v2mgq\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 11:27:57.902334 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-utilities\") pod \"community-operators-v2mgq\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 11:27:57.902411 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-catalog-content\") pod \"community-operators-v2mgq\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 11:27:57.903089 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-utilities\") pod \"community-operators-v2mgq\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 11:27:57.903135 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-catalog-content\") pod \"community-operators-v2mgq\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:27:57 crc kubenswrapper[4699]: I0226 11:27:57.921840 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq6hk\" (UniqueName: \"kubernetes.io/projected/a164bc1f-d5f1-4538-86c7-98edbe73d0af-kube-api-access-jq6hk\") pod \"community-operators-v2mgq\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:27:58 crc kubenswrapper[4699]: I0226 11:27:58.017746 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:27:58 crc kubenswrapper[4699]: I0226 11:27:58.338210 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2mgq"] Feb 26 11:27:58 crc kubenswrapper[4699]: I0226 11:27:58.976546 4699 generic.go:334] "Generic (PLEG): container finished" podID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" containerID="d21c4a1c2d1695fcffcda48cd540319b245b3271d7e80b7aed7da48b650046d0" exitCode=0 Feb 26 11:27:58 crc kubenswrapper[4699]: I0226 11:27:58.976587 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2mgq" event={"ID":"a164bc1f-d5f1-4538-86c7-98edbe73d0af","Type":"ContainerDied","Data":"d21c4a1c2d1695fcffcda48cd540319b245b3271d7e80b7aed7da48b650046d0"} Feb 26 11:27:58 crc kubenswrapper[4699]: I0226 11:27:58.976612 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2mgq" event={"ID":"a164bc1f-d5f1-4538-86c7-98edbe73d0af","Type":"ContainerStarted","Data":"753782b8fb80b4cc413697d69c7fb60630bc30f6fc60cfabadcf0bf9e2078d1c"} Feb 26 11:27:59 crc kubenswrapper[4699]: I0226 11:27:59.983752 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2mgq" event={"ID":"a164bc1f-d5f1-4538-86c7-98edbe73d0af","Type":"ContainerStarted","Data":"9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4"} Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.134103 4699 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535088-rwpx5"] Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.136641 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535088-rwpx5" Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.138835 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.139378 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.142108 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.157713 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535088-rwpx5"] Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.238477 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdbpt\" (UniqueName: \"kubernetes.io/projected/d05b2b3d-2906-4acc-aaa2-2f2674e46f27-kube-api-access-hdbpt\") pod \"auto-csr-approver-29535088-rwpx5\" (UID: \"d05b2b3d-2906-4acc-aaa2-2f2674e46f27\") " pod="openshift-infra/auto-csr-approver-29535088-rwpx5" Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.340211 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdbpt\" (UniqueName: \"kubernetes.io/projected/d05b2b3d-2906-4acc-aaa2-2f2674e46f27-kube-api-access-hdbpt\") pod \"auto-csr-approver-29535088-rwpx5\" (UID: \"d05b2b3d-2906-4acc-aaa2-2f2674e46f27\") " pod="openshift-infra/auto-csr-approver-29535088-rwpx5" Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.361975 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hdbpt\" (UniqueName: \"kubernetes.io/projected/d05b2b3d-2906-4acc-aaa2-2f2674e46f27-kube-api-access-hdbpt\") pod \"auto-csr-approver-29535088-rwpx5\" (UID: \"d05b2b3d-2906-4acc-aaa2-2f2674e46f27\") " pod="openshift-infra/auto-csr-approver-29535088-rwpx5" Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.469467 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535088-rwpx5" Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.637021 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535088-rwpx5"] Feb 26 11:28:00 crc kubenswrapper[4699]: W0226 11:28:00.645029 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd05b2b3d_2906_4acc_aaa2_2f2674e46f27.slice/crio-0dac7df2c4897357eb059b21b5fc653d4db4b06327bc9ee7f07988361f4b297c WatchSource:0}: Error finding container 0dac7df2c4897357eb059b21b5fc653d4db4b06327bc9ee7f07988361f4b297c: Status 404 returned error can't find the container with id 0dac7df2c4897357eb059b21b5fc653d4db4b06327bc9ee7f07988361f4b297c Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.991604 4699 generic.go:334] "Generic (PLEG): container finished" podID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" containerID="9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4" exitCode=0 Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.991676 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2mgq" event={"ID":"a164bc1f-d5f1-4538-86c7-98edbe73d0af","Type":"ContainerDied","Data":"9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4"} Feb 26 11:28:00 crc kubenswrapper[4699]: I0226 11:28:00.993047 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535088-rwpx5" 
event={"ID":"d05b2b3d-2906-4acc-aaa2-2f2674e46f27","Type":"ContainerStarted","Data":"0dac7df2c4897357eb059b21b5fc653d4db4b06327bc9ee7f07988361f4b297c"} Feb 26 11:28:02 crc kubenswrapper[4699]: I0226 11:28:02.001861 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2mgq" event={"ID":"a164bc1f-d5f1-4538-86c7-98edbe73d0af","Type":"ContainerStarted","Data":"d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c"} Feb 26 11:28:02 crc kubenswrapper[4699]: I0226 11:28:02.024389 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v2mgq" podStartSLOduration=2.609304583 podStartE2EDuration="5.024373575s" podCreationTimestamp="2026-02-26 11:27:57 +0000 UTC" firstStartedPulling="2026-02-26 11:27:58.977741598 +0000 UTC m=+1024.788568032" lastFinishedPulling="2026-02-26 11:28:01.39281059 +0000 UTC m=+1027.203637024" observedRunningTime="2026-02-26 11:28:02.019551597 +0000 UTC m=+1027.830378031" watchObservedRunningTime="2026-02-26 11:28:02.024373575 +0000 UTC m=+1027.835200009" Feb 26 11:28:03 crc kubenswrapper[4699]: I0226 11:28:03.009448 4699 generic.go:334] "Generic (PLEG): container finished" podID="d05b2b3d-2906-4acc-aaa2-2f2674e46f27" containerID="1a0ef1ef6d99c76627fc03dba6d4f740ea96e617f11be2b18231f70b40dd8703" exitCode=0 Feb 26 11:28:03 crc kubenswrapper[4699]: I0226 11:28:03.009573 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535088-rwpx5" event={"ID":"d05b2b3d-2906-4acc-aaa2-2f2674e46f27","Type":"ContainerDied","Data":"1a0ef1ef6d99c76627fc03dba6d4f740ea96e617f11be2b18231f70b40dd8703"} Feb 26 11:28:04 crc kubenswrapper[4699]: I0226 11:28:04.244474 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535088-rwpx5" Feb 26 11:28:04 crc kubenswrapper[4699]: I0226 11:28:04.389069 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdbpt\" (UniqueName: \"kubernetes.io/projected/d05b2b3d-2906-4acc-aaa2-2f2674e46f27-kube-api-access-hdbpt\") pod \"d05b2b3d-2906-4acc-aaa2-2f2674e46f27\" (UID: \"d05b2b3d-2906-4acc-aaa2-2f2674e46f27\") " Feb 26 11:28:04 crc kubenswrapper[4699]: I0226 11:28:04.394343 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d05b2b3d-2906-4acc-aaa2-2f2674e46f27-kube-api-access-hdbpt" (OuterVolumeSpecName: "kube-api-access-hdbpt") pod "d05b2b3d-2906-4acc-aaa2-2f2674e46f27" (UID: "d05b2b3d-2906-4acc-aaa2-2f2674e46f27"). InnerVolumeSpecName "kube-api-access-hdbpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:28:04 crc kubenswrapper[4699]: I0226 11:28:04.490252 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdbpt\" (UniqueName: \"kubernetes.io/projected/d05b2b3d-2906-4acc-aaa2-2f2674e46f27-kube-api-access-hdbpt\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:05 crc kubenswrapper[4699]: I0226 11:28:05.030916 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535088-rwpx5" event={"ID":"d05b2b3d-2906-4acc-aaa2-2f2674e46f27","Type":"ContainerDied","Data":"0dac7df2c4897357eb059b21b5fc653d4db4b06327bc9ee7f07988361f4b297c"} Feb 26 11:28:05 crc kubenswrapper[4699]: I0226 11:28:05.031344 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dac7df2c4897357eb059b21b5fc653d4db4b06327bc9ee7f07988361f4b297c" Feb 26 11:28:05 crc kubenswrapper[4699]: I0226 11:28:05.031025 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535088-rwpx5" Feb 26 11:28:05 crc kubenswrapper[4699]: I0226 11:28:05.288389 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535082-2l88q"] Feb 26 11:28:05 crc kubenswrapper[4699]: I0226 11:28:05.292714 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535082-2l88q"] Feb 26 11:28:06 crc kubenswrapper[4699]: I0226 11:28:06.269301 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96109ee-edc2-496a-b6bc-cffad5fb9a40" path="/var/lib/kubelet/pods/b96109ee-edc2-496a-b6bc-cffad5fb9a40/volumes" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.695207 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s5dgj"] Feb 26 11:28:07 crc kubenswrapper[4699]: E0226 11:28:07.695697 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05b2b3d-2906-4acc-aaa2-2f2674e46f27" containerName="oc" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.695708 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05b2b3d-2906-4acc-aaa2-2f2674e46f27" containerName="oc" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.695810 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="d05b2b3d-2906-4acc-aaa2-2f2674e46f27" containerName="oc" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.696648 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.716502 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5dgj"] Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.832610 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-catalog-content\") pod \"certified-operators-s5dgj\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.832686 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-utilities\") pod \"certified-operators-s5dgj\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.832712 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t2hz\" (UniqueName: \"kubernetes.io/projected/f9301b29-bf1c-45bc-9192-d9513b5b0726-kube-api-access-4t2hz\") pod \"certified-operators-s5dgj\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.933504 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-utilities\") pod \"certified-operators-s5dgj\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.933571 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4t2hz\" (UniqueName: \"kubernetes.io/projected/f9301b29-bf1c-45bc-9192-d9513b5b0726-kube-api-access-4t2hz\") pod \"certified-operators-s5dgj\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.933653 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-catalog-content\") pod \"certified-operators-s5dgj\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.934049 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-utilities\") pod \"certified-operators-s5dgj\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.934097 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-catalog-content\") pod \"certified-operators-s5dgj\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:07 crc kubenswrapper[4699]: I0226 11:28:07.955397 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t2hz\" (UniqueName: \"kubernetes.io/projected/f9301b29-bf1c-45bc-9192-d9513b5b0726-kube-api-access-4t2hz\") pod \"certified-operators-s5dgj\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:08 crc kubenswrapper[4699]: I0226 11:28:08.013877 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:08 crc kubenswrapper[4699]: I0226 11:28:08.018024 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:28:08 crc kubenswrapper[4699]: I0226 11:28:08.018333 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:28:08 crc kubenswrapper[4699]: I0226 11:28:08.066874 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:28:08 crc kubenswrapper[4699]: I0226 11:28:08.842297 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5dgj"] Feb 26 11:28:08 crc kubenswrapper[4699]: W0226 11:28:08.851732 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9301b29_bf1c_45bc_9192_d9513b5b0726.slice/crio-89b87751f0ad4bddeec5fe0cb86b72e858b80a42b2279b3ac84b208c0e16ab83 WatchSource:0}: Error finding container 89b87751f0ad4bddeec5fe0cb86b72e858b80a42b2279b3ac84b208c0e16ab83: Status 404 returned error can't find the container with id 89b87751f0ad4bddeec5fe0cb86b72e858b80a42b2279b3ac84b208c0e16ab83 Feb 26 11:28:09 crc kubenswrapper[4699]: I0226 11:28:09.057342 4699 generic.go:334] "Generic (PLEG): container finished" podID="f9301b29-bf1c-45bc-9192-d9513b5b0726" containerID="7960725ec9599c885caf90f1afa64d37fa9657019482f56a5ed54538ab6dd21e" exitCode=0 Feb 26 11:28:09 crc kubenswrapper[4699]: I0226 11:28:09.059167 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5dgj" event={"ID":"f9301b29-bf1c-45bc-9192-d9513b5b0726","Type":"ContainerDied","Data":"7960725ec9599c885caf90f1afa64d37fa9657019482f56a5ed54538ab6dd21e"} Feb 26 11:28:09 crc kubenswrapper[4699]: I0226 
11:28:09.059205 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5dgj" event={"ID":"f9301b29-bf1c-45bc-9192-d9513b5b0726","Type":"ContainerStarted","Data":"89b87751f0ad4bddeec5fe0cb86b72e858b80a42b2279b3ac84b208c0e16ab83"} Feb 26 11:28:09 crc kubenswrapper[4699]: I0226 11:28:09.099946 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:28:09 crc kubenswrapper[4699]: I0226 11:28:09.961058 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7c5cc54f9c-wjrrd" Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.075160 4699 generic.go:334] "Generic (PLEG): container finished" podID="f9301b29-bf1c-45bc-9192-d9513b5b0726" containerID="eb6628a2b056813a322b32dff4bfd3967ebe2e37d7c7dfee12782f9f18908173" exitCode=0 Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.075217 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5dgj" event={"ID":"f9301b29-bf1c-45bc-9192-d9513b5b0726","Type":"ContainerDied","Data":"eb6628a2b056813a322b32dff4bfd3967ebe2e37d7c7dfee12782f9f18908173"} Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.489559 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2mgq"] Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.489771 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v2mgq" podUID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" containerName="registry-server" containerID="cri-o://d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c" gracePeriod=2 Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.585508 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.585762 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.585833 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.597854 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"119837a96f7eb017f5f7e56268e9cf0e4a17276f8f8dd21ae8a57f4864ea4cf7"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.598107 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://119837a96f7eb017f5f7e56268e9cf0e4a17276f8f8dd21ae8a57f4864ea4cf7" gracePeriod=600 Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.904604 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.987858 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq6hk\" (UniqueName: \"kubernetes.io/projected/a164bc1f-d5f1-4538-86c7-98edbe73d0af-kube-api-access-jq6hk\") pod \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.987904 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-catalog-content\") pod \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.987927 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-utilities\") pod \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\" (UID: \"a164bc1f-d5f1-4538-86c7-98edbe73d0af\") " Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.988900 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-utilities" (OuterVolumeSpecName: "utilities") pod "a164bc1f-d5f1-4538-86c7-98edbe73d0af" (UID: "a164bc1f-d5f1-4538-86c7-98edbe73d0af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:28:11 crc kubenswrapper[4699]: I0226 11:28:11.992809 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a164bc1f-d5f1-4538-86c7-98edbe73d0af-kube-api-access-jq6hk" (OuterVolumeSpecName: "kube-api-access-jq6hk") pod "a164bc1f-d5f1-4538-86c7-98edbe73d0af" (UID: "a164bc1f-d5f1-4538-86c7-98edbe73d0af"). InnerVolumeSpecName "kube-api-access-jq6hk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.043364 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a164bc1f-d5f1-4538-86c7-98edbe73d0af" (UID: "a164bc1f-d5f1-4538-86c7-98edbe73d0af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.084432 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="119837a96f7eb017f5f7e56268e9cf0e4a17276f8f8dd21ae8a57f4864ea4cf7" exitCode=0 Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.084505 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"119837a96f7eb017f5f7e56268e9cf0e4a17276f8f8dd21ae8a57f4864ea4cf7"} Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.084556 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"2c2d25c558a927e58d9962b6f55de97dac3f222cb5bc89a35791fca832759b03"} Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.084573 4699 scope.go:117] "RemoveContainer" containerID="bb4262ffa74d3c4cd8ca9d3a4ee81267fb75459ec0f5e9e96d1dd3934b8627ca" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.090587 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq6hk\" (UniqueName: \"kubernetes.io/projected/a164bc1f-d5f1-4538-86c7-98edbe73d0af-kube-api-access-jq6hk\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.090621 4699 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.090913 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a164bc1f-d5f1-4538-86c7-98edbe73d0af-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.094485 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5dgj" event={"ID":"f9301b29-bf1c-45bc-9192-d9513b5b0726","Type":"ContainerStarted","Data":"3bbe57bc2e73648e372826fca5459292ff76808cee8395184b212e51692ed135"} Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.097968 4699 generic.go:334] "Generic (PLEG): container finished" podID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" containerID="d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c" exitCode=0 Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.098007 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2mgq" event={"ID":"a164bc1f-d5f1-4538-86c7-98edbe73d0af","Type":"ContainerDied","Data":"d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c"} Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.098027 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2mgq" event={"ID":"a164bc1f-d5f1-4538-86c7-98edbe73d0af","Type":"ContainerDied","Data":"753782b8fb80b4cc413697d69c7fb60630bc30f6fc60cfabadcf0bf9e2078d1c"} Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.098067 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2mgq" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.127916 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s5dgj" podStartSLOduration=2.655592584 podStartE2EDuration="5.127893555s" podCreationTimestamp="2026-02-26 11:28:07 +0000 UTC" firstStartedPulling="2026-02-26 11:28:09.060013047 +0000 UTC m=+1034.870839481" lastFinishedPulling="2026-02-26 11:28:11.532314018 +0000 UTC m=+1037.343140452" observedRunningTime="2026-02-26 11:28:12.121472453 +0000 UTC m=+1037.932298897" watchObservedRunningTime="2026-02-26 11:28:12.127893555 +0000 UTC m=+1037.938719989" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.128982 4699 scope.go:117] "RemoveContainer" containerID="d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.145291 4699 scope.go:117] "RemoveContainer" containerID="9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.146716 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2mgq"] Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.153639 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v2mgq"] Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.161521 4699 scope.go:117] "RemoveContainer" containerID="d21c4a1c2d1695fcffcda48cd540319b245b3271d7e80b7aed7da48b650046d0" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.176196 4699 scope.go:117] "RemoveContainer" containerID="d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c" Feb 26 11:28:12 crc kubenswrapper[4699]: E0226 11:28:12.176914 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c\": container with ID starting with d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c not found: ID does not exist" containerID="d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.176951 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c"} err="failed to get container status \"d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c\": rpc error: code = NotFound desc = could not find container \"d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c\": container with ID starting with d70f1b379ebd7cd6ccf6ab55b5a33fec830b34554148591d7c2c273009123d7c not found: ID does not exist" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.176979 4699 scope.go:117] "RemoveContainer" containerID="9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4" Feb 26 11:28:12 crc kubenswrapper[4699]: E0226 11:28:12.177376 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4\": container with ID starting with 9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4 not found: ID does not exist" containerID="9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.177395 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4"} err="failed to get container status \"9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4\": rpc error: code = NotFound desc = could not find container \"9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4\": container with ID 
starting with 9732faae7de71e9de9c0e1daa7b9482435142bd3504f1581216d9da8919621e4 not found: ID does not exist" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.177410 4699 scope.go:117] "RemoveContainer" containerID="d21c4a1c2d1695fcffcda48cd540319b245b3271d7e80b7aed7da48b650046d0" Feb 26 11:28:12 crc kubenswrapper[4699]: E0226 11:28:12.177648 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21c4a1c2d1695fcffcda48cd540319b245b3271d7e80b7aed7da48b650046d0\": container with ID starting with d21c4a1c2d1695fcffcda48cd540319b245b3271d7e80b7aed7da48b650046d0 not found: ID does not exist" containerID="d21c4a1c2d1695fcffcda48cd540319b245b3271d7e80b7aed7da48b650046d0" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.177687 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21c4a1c2d1695fcffcda48cd540319b245b3271d7e80b7aed7da48b650046d0"} err="failed to get container status \"d21c4a1c2d1695fcffcda48cd540319b245b3271d7e80b7aed7da48b650046d0\": rpc error: code = NotFound desc = could not find container \"d21c4a1c2d1695fcffcda48cd540319b245b3271d7e80b7aed7da48b650046d0\": container with ID starting with d21c4a1c2d1695fcffcda48cd540319b245b3271d7e80b7aed7da48b650046d0 not found: ID does not exist" Feb 26 11:28:12 crc kubenswrapper[4699]: I0226 11:28:12.268385 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" path="/var/lib/kubelet/pods/a164bc1f-d5f1-4538-86c7-98edbe73d0af/volumes" Feb 26 11:28:18 crc kubenswrapper[4699]: I0226 11:28:18.014796 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:18 crc kubenswrapper[4699]: I0226 11:28:18.015456 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:18 crc 
kubenswrapper[4699]: I0226 11:28:18.052716 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:18 crc kubenswrapper[4699]: I0226 11:28:18.186323 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:20 crc kubenswrapper[4699]: I0226 11:28:20.291610 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5dgj"] Feb 26 11:28:20 crc kubenswrapper[4699]: I0226 11:28:20.292218 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s5dgj" podUID="f9301b29-bf1c-45bc-9192-d9513b5b0726" containerName="registry-server" containerID="cri-o://3bbe57bc2e73648e372826fca5459292ff76808cee8395184b212e51692ed135" gracePeriod=2 Feb 26 11:28:21 crc kubenswrapper[4699]: I0226 11:28:21.108356 4699 scope.go:117] "RemoveContainer" containerID="13f8b1b98d014497027ee7037eac5f0ce1bbfdb9879bcfae0154cb4a61717ad1" Feb 26 11:28:21 crc kubenswrapper[4699]: I0226 11:28:21.171364 4699 generic.go:334] "Generic (PLEG): container finished" podID="f9301b29-bf1c-45bc-9192-d9513b5b0726" containerID="3bbe57bc2e73648e372826fca5459292ff76808cee8395184b212e51692ed135" exitCode=0 Feb 26 11:28:21 crc kubenswrapper[4699]: I0226 11:28:21.171446 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5dgj" event={"ID":"f9301b29-bf1c-45bc-9192-d9513b5b0726","Type":"ContainerDied","Data":"3bbe57bc2e73648e372826fca5459292ff76808cee8395184b212e51692ed135"} Feb 26 11:28:21 crc kubenswrapper[4699]: I0226 11:28:21.909820 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.070546 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-catalog-content\") pod \"f9301b29-bf1c-45bc-9192-d9513b5b0726\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.070639 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t2hz\" (UniqueName: \"kubernetes.io/projected/f9301b29-bf1c-45bc-9192-d9513b5b0726-kube-api-access-4t2hz\") pod \"f9301b29-bf1c-45bc-9192-d9513b5b0726\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.070727 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-utilities\") pod \"f9301b29-bf1c-45bc-9192-d9513b5b0726\" (UID: \"f9301b29-bf1c-45bc-9192-d9513b5b0726\") " Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.071951 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-utilities" (OuterVolumeSpecName: "utilities") pod "f9301b29-bf1c-45bc-9192-d9513b5b0726" (UID: "f9301b29-bf1c-45bc-9192-d9513b5b0726"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.077280 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9301b29-bf1c-45bc-9192-d9513b5b0726-kube-api-access-4t2hz" (OuterVolumeSpecName: "kube-api-access-4t2hz") pod "f9301b29-bf1c-45bc-9192-d9513b5b0726" (UID: "f9301b29-bf1c-45bc-9192-d9513b5b0726"). InnerVolumeSpecName "kube-api-access-4t2hz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.172824 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.172857 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t2hz\" (UniqueName: \"kubernetes.io/projected/f9301b29-bf1c-45bc-9192-d9513b5b0726-kube-api-access-4t2hz\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.183940 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5dgj" event={"ID":"f9301b29-bf1c-45bc-9192-d9513b5b0726","Type":"ContainerDied","Data":"89b87751f0ad4bddeec5fe0cb86b72e858b80a42b2279b3ac84b208c0e16ab83"} Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.183969 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5dgj" Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.184001 4699 scope.go:117] "RemoveContainer" containerID="3bbe57bc2e73648e372826fca5459292ff76808cee8395184b212e51692ed135" Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.202161 4699 scope.go:117] "RemoveContainer" containerID="eb6628a2b056813a322b32dff4bfd3967ebe2e37d7c7dfee12782f9f18908173" Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.221174 4699 scope.go:117] "RemoveContainer" containerID="7960725ec9599c885caf90f1afa64d37fa9657019482f56a5ed54538ab6dd21e" Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.680395 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9301b29-bf1c-45bc-9192-d9513b5b0726" (UID: "f9301b29-bf1c-45bc-9192-d9513b5b0726"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.779904 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9301b29-bf1c-45bc-9192-d9513b5b0726-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.811382 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5dgj"] Feb 26 11:28:22 crc kubenswrapper[4699]: I0226 11:28:22.816805 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s5dgj"] Feb 26 11:28:24 crc kubenswrapper[4699]: I0226 11:28:24.272656 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9301b29-bf1c-45bc-9192-d9513b5b0726" path="/var/lib/kubelet/pods/f9301b29-bf1c-45bc-9192-d9513b5b0726/volumes" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.759543 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9"] Feb 26 11:28:29 crc kubenswrapper[4699]: E0226 11:28:29.761166 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" containerName="extract-content" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.761188 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" containerName="extract-content" Feb 26 11:28:29 crc kubenswrapper[4699]: E0226 11:28:29.761203 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9301b29-bf1c-45bc-9192-d9513b5b0726" containerName="extract-content" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.761211 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9301b29-bf1c-45bc-9192-d9513b5b0726" containerName="extract-content" Feb 26 11:28:29 crc kubenswrapper[4699]: E0226 11:28:29.761230 4699 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" containerName="registry-server" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.761237 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" containerName="registry-server" Feb 26 11:28:29 crc kubenswrapper[4699]: E0226 11:28:29.761248 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9301b29-bf1c-45bc-9192-d9513b5b0726" containerName="registry-server" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.761255 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9301b29-bf1c-45bc-9192-d9513b5b0726" containerName="registry-server" Feb 26 11:28:29 crc kubenswrapper[4699]: E0226 11:28:29.761265 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9301b29-bf1c-45bc-9192-d9513b5b0726" containerName="extract-utilities" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.761273 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9301b29-bf1c-45bc-9192-d9513b5b0726" containerName="extract-utilities" Feb 26 11:28:29 crc kubenswrapper[4699]: E0226 11:28:29.761281 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" containerName="extract-utilities" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.761287 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" containerName="extract-utilities" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.761405 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="a164bc1f-d5f1-4538-86c7-98edbe73d0af" containerName="registry-server" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.761420 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9301b29-bf1c-45bc-9192-d9513b5b0726" containerName="registry-server" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.761857 4699 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.763254 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lhv6d" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.764439 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.765366 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.767415 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-9gzn2" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.771661 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.775643 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn9kw\" (UniqueName: \"kubernetes.io/projected/35555f68-d5c4-44b2-9dfa-af5f91f57c7c-kube-api-access-gn9kw\") pod \"cinder-operator-controller-manager-55d77d7b5c-xw85z\" (UID: \"35555f68-d5c4-44b2-9dfa-af5f91f57c7c\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.775780 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwvsl\" (UniqueName: \"kubernetes.io/projected/1814471e-5f82-4464-9528-75da66d7235b-kube-api-access-nwvsl\") pod \"barbican-operator-controller-manager-868647ff47-sndb9\" (UID: 
\"1814471e-5f82-4464-9528-75da66d7235b\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.781165 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.787088 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.788222 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.803884 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-87dx4" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.811572 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.812505 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.816316 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-qfq8w" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.844423 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.864068 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.877636 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.878230 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn9kw\" (UniqueName: \"kubernetes.io/projected/35555f68-d5c4-44b2-9dfa-af5f91f57c7c-kube-api-access-gn9kw\") pod \"cinder-operator-controller-manager-55d77d7b5c-xw85z\" (UID: \"35555f68-d5c4-44b2-9dfa-af5f91f57c7c\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.878316 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cnkw\" (UniqueName: \"kubernetes.io/projected/07c2552c-8182-4cfe-a397-39ad287029e5-kube-api-access-2cnkw\") pod \"designate-operator-controller-manager-6d8bf5c495-4k4sm\" (UID: \"07c2552c-8182-4cfe-a397-39ad287029e5\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.878354 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-gwnhp\" (UniqueName: \"kubernetes.io/projected/27e251bb-8f9b-48d4-9ea3-81d03fd85244-kube-api-access-gwnhp\") pod \"glance-operator-controller-manager-784b5bb6c5-jh7vz\" (UID: \"27e251bb-8f9b-48d4-9ea3-81d03fd85244\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.878385 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwvsl\" (UniqueName: \"kubernetes.io/projected/1814471e-5f82-4464-9528-75da66d7235b-kube-api-access-nwvsl\") pod \"barbican-operator-controller-manager-868647ff47-sndb9\" (UID: \"1814471e-5f82-4464-9528-75da66d7235b\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.878925 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.881959 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-vxg4r" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.919303 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwvsl\" (UniqueName: \"kubernetes.io/projected/1814471e-5f82-4464-9528-75da66d7235b-kube-api-access-nwvsl\") pod \"barbican-operator-controller-manager-868647ff47-sndb9\" (UID: \"1814471e-5f82-4464-9528-75da66d7235b\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.919374 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn9kw\" (UniqueName: \"kubernetes.io/projected/35555f68-d5c4-44b2-9dfa-af5f91f57c7c-kube-api-access-gn9kw\") pod \"cinder-operator-controller-manager-55d77d7b5c-xw85z\" (UID: 
\"35555f68-d5c4-44b2-9dfa-af5f91f57c7c\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.940086 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.940917 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.946322 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-wsnzb" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.946833 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.974328 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.979378 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2d2\" (UniqueName: \"kubernetes.io/projected/7b204025-d5ff-4c74-96b9-6774b62e0cc4-kube-api-access-pj2d2\") pod \"heat-operator-controller-manager-69f49c598c-t8c9f\" (UID: \"7b204025-d5ff-4c74-96b9-6774b62e0cc4\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.979471 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrm4v\" (UniqueName: \"kubernetes.io/projected/619dff06-7255-4aab-9ffe-9f2561bcc904-kube-api-access-wrm4v\") pod \"horizon-operator-controller-manager-5b9b8895d5-qf9vd\" (UID: 
\"619dff06-7255-4aab-9ffe-9f2561bcc904\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.979507 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cnkw\" (UniqueName: \"kubernetes.io/projected/07c2552c-8182-4cfe-a397-39ad287029e5-kube-api-access-2cnkw\") pod \"designate-operator-controller-manager-6d8bf5c495-4k4sm\" (UID: \"07c2552c-8182-4cfe-a397-39ad287029e5\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.979554 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwnhp\" (UniqueName: \"kubernetes.io/projected/27e251bb-8f9b-48d4-9ea3-81d03fd85244-kube-api-access-gwnhp\") pod \"glance-operator-controller-manager-784b5bb6c5-jh7vz\" (UID: \"27e251bb-8f9b-48d4-9ea3-81d03fd85244\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.980234 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.981230 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.987302 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p"] Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.988138 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5mgbn" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.988287 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 26 11:28:29 crc kubenswrapper[4699]: I0226 11:28:29.989778 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.000215 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.002505 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-mqvph" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.023714 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cnkw\" (UniqueName: \"kubernetes.io/projected/07c2552c-8182-4cfe-a397-39ad287029e5-kube-api-access-2cnkw\") pod \"designate-operator-controller-manager-6d8bf5c495-4k4sm\" (UID: \"07c2552c-8182-4cfe-a397-39ad287029e5\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.028701 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p"] Feb 26 11:28:30 crc 
kubenswrapper[4699]: I0226 11:28:30.040573 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwnhp\" (UniqueName: \"kubernetes.io/projected/27e251bb-8f9b-48d4-9ea3-81d03fd85244-kube-api-access-gwnhp\") pod \"glance-operator-controller-manager-784b5bb6c5-jh7vz\" (UID: \"27e251bb-8f9b-48d4-9ea3-81d03fd85244\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.080294 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj2d2\" (UniqueName: \"kubernetes.io/projected/7b204025-d5ff-4c74-96b9-6774b62e0cc4-kube-api-access-pj2d2\") pod \"heat-operator-controller-manager-69f49c598c-t8c9f\" (UID: \"7b204025-d5ff-4c74-96b9-6774b62e0cc4\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.080367 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jwl\" (UniqueName: \"kubernetes.io/projected/afbeb2d8-c332-447b-a931-9fe7b246914d-kube-api-access-s5jwl\") pod \"infra-operator-controller-manager-79d975b745-mtrs6\" (UID: \"afbeb2d8-c332-447b-a931-9fe7b246914d\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.080423 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrm4v\" (UniqueName: \"kubernetes.io/projected/619dff06-7255-4aab-9ffe-9f2561bcc904-kube-api-access-wrm4v\") pod \"horizon-operator-controller-manager-5b9b8895d5-qf9vd\" (UID: \"619dff06-7255-4aab-9ffe-9f2561bcc904\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.080466 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2fjxs\" (UniqueName: \"kubernetes.io/projected/d56efcbf-3414-4bd1-9cbf-d56c434ac529-kube-api-access-2fjxs\") pod \"ironic-operator-controller-manager-554564d7fc-5k85p\" (UID: \"d56efcbf-3414-4bd1-9cbf-d56c434ac529\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.080491 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert\") pod \"infra-operator-controller-manager-79d975b745-mtrs6\" (UID: \"afbeb2d8-c332-447b-a931-9fe7b246914d\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.085337 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.086184 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.097884 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-v465z" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.107169 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.108178 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.110554 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.119598 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-6jz96" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.120054 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.120834 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.123731 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9d64c" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.127083 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.139405 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.139871 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.157494 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.158553 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrm4v\" (UniqueName: \"kubernetes.io/projected/619dff06-7255-4aab-9ffe-9f2561bcc904-kube-api-access-wrm4v\") pod \"horizon-operator-controller-manager-5b9b8895d5-qf9vd\" (UID: \"619dff06-7255-4aab-9ffe-9f2561bcc904\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.173161 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.175806 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.176998 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.181234 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj2d2\" (UniqueName: \"kubernetes.io/projected/7b204025-d5ff-4c74-96b9-6774b62e0cc4-kube-api-access-pj2d2\") pod \"heat-operator-controller-manager-69f49c598c-t8c9f\" (UID: \"7b204025-d5ff-4c74-96b9-6774b62e0cc4\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.181824 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kgcg\" (UniqueName: \"kubernetes.io/projected/caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2-kube-api-access-8kgcg\") pod \"manila-operator-controller-manager-67d996989d-9gwwj\" (UID: \"caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.181860 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h4qj\" (UniqueName: \"kubernetes.io/projected/38eef260-c32f-4568-9936-6197ba984f05-kube-api-access-5h4qj\") pod \"mariadb-operator-controller-manager-6994f66f48-95whc\" (UID: \"38eef260-c32f-4568-9936-6197ba984f05\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.181894 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fjxs\" (UniqueName: \"kubernetes.io/projected/d56efcbf-3414-4bd1-9cbf-d56c434ac529-kube-api-access-2fjxs\") pod \"ironic-operator-controller-manager-554564d7fc-5k85p\" (UID: \"d56efcbf-3414-4bd1-9cbf-d56c434ac529\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p" Feb 26 11:28:30 crc 
kubenswrapper[4699]: I0226 11:28:30.181928 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert\") pod \"infra-operator-controller-manager-79d975b745-mtrs6\" (UID: \"afbeb2d8-c332-447b-a931-9fe7b246914d\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.181960 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5jwl\" (UniqueName: \"kubernetes.io/projected/afbeb2d8-c332-447b-a931-9fe7b246914d-kube-api-access-s5jwl\") pod \"infra-operator-controller-manager-79d975b745-mtrs6\" (UID: \"afbeb2d8-c332-447b-a931-9fe7b246914d\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.181994 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6zf\" (UniqueName: \"kubernetes.io/projected/a2c419ab-2a99-4d37-b46c-b84024f24b2e-kube-api-access-mb6zf\") pod \"keystone-operator-controller-manager-b4d948c87-d2pxc\" (UID: \"a2c419ab-2a99-4d37-b46c-b84024f24b2e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" Feb 26 11:28:30 crc kubenswrapper[4699]: E0226 11:28:30.182530 4699 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 11:28:30 crc kubenswrapper[4699]: E0226 11:28:30.182589 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert podName:afbeb2d8-c332-447b-a931-9fe7b246914d nodeName:}" failed. No retries permitted until 2026-02-26 11:28:30.682567414 +0000 UTC m=+1056.493393848 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert") pod "infra-operator-controller-manager-79d975b745-mtrs6" (UID: "afbeb2d8-c332-447b-a931-9fe7b246914d") : secret "infra-operator-webhook-server-cert" not found Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.184355 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.185430 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.189348 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gltb9" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.205431 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.205725 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.224458 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.224500 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.247668 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.272887 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-6d6ph" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.273300 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kllrb" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.287243 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.293865 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.317830 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fjxs\" (UniqueName: \"kubernetes.io/projected/d56efcbf-3414-4bd1-9cbf-d56c434ac529-kube-api-access-2fjxs\") pod \"ironic-operator-controller-manager-554564d7fc-5k85p\" (UID: \"d56efcbf-3414-4bd1-9cbf-d56c434ac529\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.335284 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5jwl\" (UniqueName: \"kubernetes.io/projected/afbeb2d8-c332-447b-a931-9fe7b246914d-kube-api-access-s5jwl\") pod \"infra-operator-controller-manager-79d975b745-mtrs6\" (UID: \"afbeb2d8-c332-447b-a931-9fe7b246914d\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.386491 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kgcg\" 
(UniqueName: \"kubernetes.io/projected/caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2-kube-api-access-8kgcg\") pod \"manila-operator-controller-manager-67d996989d-9gwwj\" (UID: \"caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.386558 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h4qj\" (UniqueName: \"kubernetes.io/projected/38eef260-c32f-4568-9936-6197ba984f05-kube-api-access-5h4qj\") pod \"mariadb-operator-controller-manager-6994f66f48-95whc\" (UID: \"38eef260-c32f-4568-9936-6197ba984f05\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.387018 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb6zf\" (UniqueName: \"kubernetes.io/projected/a2c419ab-2a99-4d37-b46c-b84024f24b2e-kube-api-access-mb6zf\") pod \"keystone-operator-controller-manager-b4d948c87-d2pxc\" (UID: \"a2c419ab-2a99-4d37-b46c-b84024f24b2e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.403169 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.455631 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h4qj\" (UniqueName: \"kubernetes.io/projected/38eef260-c32f-4568-9936-6197ba984f05-kube-api-access-5h4qj\") pod \"mariadb-operator-controller-manager-6994f66f48-95whc\" (UID: \"38eef260-c32f-4568-9936-6197ba984f05\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.456341 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb6zf\" (UniqueName: \"kubernetes.io/projected/a2c419ab-2a99-4d37-b46c-b84024f24b2e-kube-api-access-mb6zf\") pod \"keystone-operator-controller-manager-b4d948c87-d2pxc\" (UID: \"a2c419ab-2a99-4d37-b46c-b84024f24b2e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.480041 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kgcg\" (UniqueName: \"kubernetes.io/projected/caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2-kube-api-access-8kgcg\") pod \"manila-operator-controller-manager-67d996989d-9gwwj\" (UID: \"caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.489276 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vmk9\" (UniqueName: \"kubernetes.io/projected/0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee-kube-api-access-7vmk9\") pod \"nova-operator-controller-manager-567668f5cf-4mghs\" (UID: \"0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.489328 
4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2wxf\" (UniqueName: \"kubernetes.io/projected/54959b79-361c-415a-986d-1af6d8eb6701-kube-api-access-g2wxf\") pod \"neutron-operator-controller-manager-6bd4687957-6gblm\" (UID: \"54959b79-361c-415a-986d-1af6d8eb6701\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.489372 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97rqs\" (UniqueName: \"kubernetes.io/projected/a6e7ca85-e18b-4605-9180-316f65b82006-kube-api-access-97rqs\") pod \"octavia-operator-controller-manager-659dc6bbfc-2wj2n\" (UID: \"a6e7ca85-e18b-4605-9180-316f65b82006\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.514550 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.523362 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.525600 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.532154 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.532617 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-np2lz" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.570426 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.577940 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.592590 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vmk9\" (UniqueName: \"kubernetes.io/projected/0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee-kube-api-access-7vmk9\") pod \"nova-operator-controller-manager-567668f5cf-4mghs\" (UID: \"0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.592654 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2wxf\" (UniqueName: \"kubernetes.io/projected/54959b79-361c-415a-986d-1af6d8eb6701-kube-api-access-g2wxf\") pod \"neutron-operator-controller-manager-6bd4687957-6gblm\" (UID: \"54959b79-361c-415a-986d-1af6d8eb6701\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.592707 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-97rqs\" (UniqueName: \"kubernetes.io/projected/a6e7ca85-e18b-4605-9180-316f65b82006-kube-api-access-97rqs\") pod \"octavia-operator-controller-manager-659dc6bbfc-2wj2n\" (UID: \"a6e7ca85-e18b-4605-9180-316f65b82006\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.593796 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-96png"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.596369 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-96png" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.597928 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.601503 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vgngh" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.614067 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.620205 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.622340 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fflsw" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.644070 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-96png"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.646392 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vmk9\" (UniqueName: \"kubernetes.io/projected/0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee-kube-api-access-7vmk9\") pod \"nova-operator-controller-manager-567668f5cf-4mghs\" (UID: \"0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.646392 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2wxf\" (UniqueName: \"kubernetes.io/projected/54959b79-361c-415a-986d-1af6d8eb6701-kube-api-access-g2wxf\") pod \"neutron-operator-controller-manager-6bd4687957-6gblm\" (UID: \"54959b79-361c-415a-986d-1af6d8eb6701\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.647372 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97rqs\" (UniqueName: \"kubernetes.io/projected/a6e7ca85-e18b-4605-9180-316f65b82006-kube-api-access-97rqs\") pod \"octavia-operator-controller-manager-659dc6bbfc-2wj2n\" (UID: \"a6e7ca85-e18b-4605-9180-316f65b82006\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.651606 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.679939 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.694839 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qzxb\" (UniqueName: \"kubernetes.io/projected/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-kube-api-access-5qzxb\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb\" (UID: \"ce7c40ca-05ad-49ca-a091-02ac588c3eb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.694907 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb\" (UID: \"ce7c40ca-05ad-49ca-a091-02ac588c3eb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.695000 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert\") pod \"infra-operator-controller-manager-79d975b745-mtrs6\" (UID: \"afbeb2d8-c332-447b-a931-9fe7b246914d\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.695029 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpsrr\" (UniqueName: \"kubernetes.io/projected/a90c4025-7bd1-401b-8f92-5f15a58fb3d6-kube-api-access-tpsrr\") pod 
\"ovn-operator-controller-manager-5955d8c787-96png\" (UID: \"a90c4025-7bd1-401b-8f92-5f15a58fb3d6\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-96png" Feb 26 11:28:30 crc kubenswrapper[4699]: E0226 11:28:30.695221 4699 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 11:28:30 crc kubenswrapper[4699]: E0226 11:28:30.695299 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert podName:afbeb2d8-c332-447b-a931-9fe7b246914d nodeName:}" failed. No retries permitted until 2026-02-26 11:28:31.695280028 +0000 UTC m=+1057.506106462 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert") pod "infra-operator-controller-manager-79d975b745-mtrs6" (UID: "afbeb2d8-c332-447b-a931-9fe7b246914d") : secret "infra-operator-webhook-server-cert" not found Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.698297 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr"] Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.699345 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.702014 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-j6gsg" Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.719721 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.734214 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr"]
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.748015 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.758964 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5"]
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.760087 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.764701 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-fmhd6"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.764896 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5"]
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.776314 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr"]
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.777236 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.781085 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nts9g"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.781247 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr"]
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.788830 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7"]
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.789177 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.789964 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.793507 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qzv2j"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.797785 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28f6x\" (UniqueName: \"kubernetes.io/projected/7545763d-d2d2-4b6e-980d-737062f0a894-kube-api-access-28f6x\") pod \"placement-operator-controller-manager-8497b45c89-jxr77\" (UID: \"7545763d-d2d2-4b6e-980d-737062f0a894\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.797875 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpsrr\" (UniqueName: \"kubernetes.io/projected/a90c4025-7bd1-401b-8f92-5f15a58fb3d6-kube-api-access-tpsrr\") pod \"ovn-operator-controller-manager-5955d8c787-96png\" (UID: \"a90c4025-7bd1-401b-8f92-5f15a58fb3d6\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-96png"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.798014 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qzxb\" (UniqueName: \"kubernetes.io/projected/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-kube-api-access-5qzxb\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb\" (UID: \"ce7c40ca-05ad-49ca-a091-02ac588c3eb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.798075 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb\" (UID: \"ce7c40ca-05ad-49ca-a091-02ac588c3eb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.798100 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5nk9\" (UniqueName: \"kubernetes.io/projected/33fc0a61-18c9-4e80-b898-92a5b1b71dac-kube-api-access-p5nk9\") pod \"swift-operator-controller-manager-68f46476f-bqvxr\" (UID: \"33fc0a61-18c9-4e80-b898-92a5b1b71dac\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr"
Feb 26 11:28:30 crc kubenswrapper[4699]: E0226 11:28:30.800871 4699 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 26 11:28:30 crc kubenswrapper[4699]: E0226 11:28:30.800921 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert podName:ce7c40ca-05ad-49ca-a091-02ac588c3eb7 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:31.300906684 +0000 UTC m=+1057.111733118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" (UID: "ce7c40ca-05ad-49ca-a091-02ac588c3eb7") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.801032 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7"]
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.832214 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpsrr\" (UniqueName: \"kubernetes.io/projected/a90c4025-7bd1-401b-8f92-5f15a58fb3d6-kube-api-access-tpsrr\") pod \"ovn-operator-controller-manager-5955d8c787-96png\" (UID: \"a90c4025-7bd1-401b-8f92-5f15a58fb3d6\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-96png"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.841335 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"]
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.845988 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qzxb\" (UniqueName: \"kubernetes.io/projected/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-kube-api-access-5qzxb\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb\" (UID: \"ce7c40ca-05ad-49ca-a091-02ac588c3eb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.859827 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"]
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.859922 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.861844 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.862273 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.862731 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-x6dfs"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.868230 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4"]
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.869083 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.870229 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-96png"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.871886 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-p5ttl"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.899504 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkj4s\" (UniqueName: \"kubernetes.io/projected/a2b3bf3b-a815-4033-983b-eedc16b8609f-kube-api-access-lkj4s\") pod \"watcher-operator-controller-manager-bccc79885-fnnc7\" (UID: \"a2b3bf3b-a815-4033-983b-eedc16b8609f\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.899568 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hdkk\" (UniqueName: \"kubernetes.io/projected/5be0c14a-e51f-4b69-ab58-c0cac66910e2-kube-api-access-6hdkk\") pod \"test-operator-controller-manager-5dc6794d5b-mwvnr\" (UID: \"5be0c14a-e51f-4b69-ab58-c0cac66910e2\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.899592 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsfpb\" (UniqueName: \"kubernetes.io/projected/15255a9b-0767-4518-8e81-ca9044f9190a-kube-api-access-wsfpb\") pod \"telemetry-operator-controller-manager-589c568786-f9kz5\" (UID: \"15255a9b-0767-4518-8e81-ca9044f9190a\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.899640 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5nk9\" (UniqueName: \"kubernetes.io/projected/33fc0a61-18c9-4e80-b898-92a5b1b71dac-kube-api-access-p5nk9\") pod \"swift-operator-controller-manager-68f46476f-bqvxr\" (UID: \"33fc0a61-18c9-4e80-b898-92a5b1b71dac\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.899680 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28f6x\" (UniqueName: \"kubernetes.io/projected/7545763d-d2d2-4b6e-980d-737062f0a894-kube-api-access-28f6x\") pod \"placement-operator-controller-manager-8497b45c89-jxr77\" (UID: \"7545763d-d2d2-4b6e-980d-737062f0a894\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.908493 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4"]
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.933259 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28f6x\" (UniqueName: \"kubernetes.io/projected/7545763d-d2d2-4b6e-980d-737062f0a894-kube-api-access-28f6x\") pod \"placement-operator-controller-manager-8497b45c89-jxr77\" (UID: \"7545763d-d2d2-4b6e-980d-737062f0a894\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77"
Feb 26 11:28:30 crc kubenswrapper[4699]: I0226 11:28:30.933785 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5nk9\" (UniqueName: \"kubernetes.io/projected/33fc0a61-18c9-4e80-b898-92a5b1b71dac-kube-api-access-p5nk9\") pod \"swift-operator-controller-manager-68f46476f-bqvxr\" (UID: \"33fc0a61-18c9-4e80-b898-92a5b1b71dac\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.002019 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4r4l\" (UniqueName: \"kubernetes.io/projected/ebf1a568-be30-4ceb-bc67-e3158a0280b9-kube-api-access-z4r4l\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.002354 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.002418 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.002453 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkj4s\" (UniqueName: \"kubernetes.io/projected/a2b3bf3b-a815-4033-983b-eedc16b8609f-kube-api-access-lkj4s\") pod \"watcher-operator-controller-manager-bccc79885-fnnc7\" (UID: \"a2b3bf3b-a815-4033-983b-eedc16b8609f\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.002482 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2sjn\" (UniqueName: \"kubernetes.io/projected/8d440653-f1c3-483c-a37d-463dcfc15224-kube-api-access-d2sjn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ghqf4\" (UID: \"8d440653-f1c3-483c-a37d-463dcfc15224\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.002515 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hdkk\" (UniqueName: \"kubernetes.io/projected/5be0c14a-e51f-4b69-ab58-c0cac66910e2-kube-api-access-6hdkk\") pod \"test-operator-controller-manager-5dc6794d5b-mwvnr\" (UID: \"5be0c14a-e51f-4b69-ab58-c0cac66910e2\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.002542 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsfpb\" (UniqueName: \"kubernetes.io/projected/15255a9b-0767-4518-8e81-ca9044f9190a-kube-api-access-wsfpb\") pod \"telemetry-operator-controller-manager-589c568786-f9kz5\" (UID: \"15255a9b-0767-4518-8e81-ca9044f9190a\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.046249 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkj4s\" (UniqueName: \"kubernetes.io/projected/a2b3bf3b-a815-4033-983b-eedc16b8609f-kube-api-access-lkj4s\") pod \"watcher-operator-controller-manager-bccc79885-fnnc7\" (UID: \"a2b3bf3b-a815-4033-983b-eedc16b8609f\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.059179 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hdkk\" (UniqueName: \"kubernetes.io/projected/5be0c14a-e51f-4b69-ab58-c0cac66910e2-kube-api-access-6hdkk\") pod \"test-operator-controller-manager-5dc6794d5b-mwvnr\" (UID: \"5be0c14a-e51f-4b69-ab58-c0cac66910e2\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.063686 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsfpb\" (UniqueName: \"kubernetes.io/projected/15255a9b-0767-4518-8e81-ca9044f9190a-kube-api-access-wsfpb\") pod \"telemetry-operator-controller-manager-589c568786-f9kz5\" (UID: \"15255a9b-0767-4518-8e81-ca9044f9190a\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.103878 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.103965 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.104001 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2sjn\" (UniqueName: \"kubernetes.io/projected/8d440653-f1c3-483c-a37d-463dcfc15224-kube-api-access-d2sjn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ghqf4\" (UID: \"8d440653-f1c3-483c-a37d-463dcfc15224\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.104106 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4r4l\" (UniqueName: \"kubernetes.io/projected/ebf1a568-be30-4ceb-bc67-e3158a0280b9-kube-api-access-z4r4l\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.104171 4699 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.104250 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs podName:ebf1a568-be30-4ceb-bc67-e3158a0280b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:31.60423057 +0000 UTC m=+1057.415057014 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs") pod "openstack-operator-controller-manager-947f4f86b-m69sv" (UID: "ebf1a568-be30-4ceb-bc67-e3158a0280b9") : secret "metrics-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.104188 4699 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.104510 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs podName:ebf1a568-be30-4ceb-bc67-e3158a0280b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:31.604490277 +0000 UTC m=+1057.415316711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs") pod "openstack-operator-controller-manager-947f4f86b-m69sv" (UID: "ebf1a568-be30-4ceb-bc67-e3158a0280b9") : secret "webhook-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.132415 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2sjn\" (UniqueName: \"kubernetes.io/projected/8d440653-f1c3-483c-a37d-463dcfc15224-kube-api-access-d2sjn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ghqf4\" (UID: \"8d440653-f1c3-483c-a37d-463dcfc15224\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.144604 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4r4l\" (UniqueName: \"kubernetes.io/projected/ebf1a568-be30-4ceb-bc67-e3158a0280b9-kube-api-access-z4r4l\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.199015 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.222184 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.242885 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.272457 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.310808 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb\" (UID: \"ce7c40ca-05ad-49ca-a091-02ac588c3eb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb"
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.311584 4699 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.311648 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert podName:ce7c40ca-05ad-49ca-a091-02ac588c3eb7 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:32.311628563 +0000 UTC m=+1058.122454997 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" (UID: "ce7c40ca-05ad-49ca-a091-02ac588c3eb7") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.373213 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.421823 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.617942 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.618036 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.618171 4699 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.618231 4699 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.618258 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs podName:ebf1a568-be30-4ceb-bc67-e3158a0280b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:32.61823233 +0000 UTC m=+1058.429058764 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs") pod "openstack-operator-controller-manager-947f4f86b-m69sv" (UID: "ebf1a568-be30-4ceb-bc67-e3158a0280b9") : secret "webhook-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.618302 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs podName:ebf1a568-be30-4ceb-bc67-e3158a0280b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:32.618285071 +0000 UTC m=+1058.429111585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs") pod "openstack-operator-controller-manager-947f4f86b-m69sv" (UID: "ebf1a568-be30-4ceb-bc67-e3158a0280b9") : secret "metrics-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.630292 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f"]
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.635407 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9"]
Feb 26 11:28:31 crc kubenswrapper[4699]: W0226 11:28:31.636964 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b204025_d5ff_4c74_96b9_6774b62e0cc4.slice/crio-8c4fb8e66f8f1dfface81fc9e61cadc1e57ce5df81ab31b96de495e237aaa5a4 WatchSource:0}: Error finding container 8c4fb8e66f8f1dfface81fc9e61cadc1e57ce5df81ab31b96de495e237aaa5a4: Status 404 returned error can't find the container with id 8c4fb8e66f8f1dfface81fc9e61cadc1e57ce5df81ab31b96de495e237aaa5a4
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.722268 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert\") pod \"infra-operator-controller-manager-79d975b745-mtrs6\" (UID: \"afbeb2d8-c332-447b-a931-9fe7b246914d\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6"
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.722485 4699 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: E0226 11:28:31.722558 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert podName:afbeb2d8-c332-447b-a931-9fe7b246914d nodeName:}" failed. No retries permitted until 2026-02-26 11:28:33.722523258 +0000 UTC m=+1059.533349692 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert") pod "infra-operator-controller-manager-79d975b745-mtrs6" (UID: "afbeb2d8-c332-447b-a931-9fe7b246914d") : secret "infra-operator-webhook-server-cert" not found
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.773612 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p"]
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.806042 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz"]
Feb 26 11:28:31 crc kubenswrapper[4699]: I0226 11:28:31.960406 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z"]
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.025173 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm"]
Feb 26 11:28:32 crc kubenswrapper[4699]: W0226 11:28:32.036393 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07c2552c_8182_4cfe_a397_39ad287029e5.slice/crio-2a5aaba4b6ff0ac20e706fa4cb48c18ea61316825ddc6a5ac1e78e37ff5a21eb WatchSource:0}: Error finding container 2a5aaba4b6ff0ac20e706fa4cb48c18ea61316825ddc6a5ac1e78e37ff5a21eb: Status 404 returned error can't find the container with id 2a5aaba4b6ff0ac20e706fa4cb48c18ea61316825ddc6a5ac1e78e37ff5a21eb
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.083878 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs"]
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.117919 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-96png"]
Feb 26 11:28:32 crc kubenswrapper[4699]: W0226 11:28:32.126036 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda90c4025_7bd1_401b_8f92_5f15a58fb3d6.slice/crio-d6aa0057c32f13b3e8adb701b0ee1ce87290118b32e08f2e34242c3151275b37 WatchSource:0}: Error finding container d6aa0057c32f13b3e8adb701b0ee1ce87290118b32e08f2e34242c3151275b37: Status 404 returned error can't find the container with id d6aa0057c32f13b3e8adb701b0ee1ce87290118b32e08f2e34242c3151275b37
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.226554 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n"]
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.238477 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd"]
Feb 26 11:28:32 crc kubenswrapper[4699]: W0226 11:28:32.253056 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6e7ca85_e18b_4605_9180_316f65b82006.slice/crio-aa96c261f60037df1a40fde6b38bfd921f422a546b721874f9515049a4d51ea0 WatchSource:0}: Error finding container aa96c261f60037df1a40fde6b38bfd921f422a546b721874f9515049a4d51ea0: Status 404 returned error can't find the container with id aa96c261f60037df1a40fde6b38bfd921f422a546b721874f9515049a4d51ea0
Feb 26 11:28:32 crc kubenswrapper[4699]: W0226 11:28:32.259282 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod619dff06_7255_4aab_9ffe_9f2561bcc904.slice/crio-db102e99dacd7e0f310a45a6a04234e31237ab1cbcd6c93742ded6628d3cb9d1 WatchSource:0}: Error finding container db102e99dacd7e0f310a45a6a04234e31237ab1cbcd6c93742ded6628d3cb9d1: Status 404 returned error can't find the container with id db102e99dacd7e0f310a45a6a04234e31237ab1cbcd6c93742ded6628d3cb9d1
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.325788 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc"]
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.325835 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj"]
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.325846 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr"]
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.325855 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77"]
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.325865 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5"]
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.325875 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc"]
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.327534 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr"]
Feb 26 11:28:32 crc kubenswrapper[4699]: W0226 11:28:32.328347 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7545763d_d2d2_4b6e_980d_737062f0a894.slice/crio-a45f38838029fc27dc6caa1da4ec104a117a6feb671d57bac438346127d9e56f WatchSource:0}: Error finding container a45f38838029fc27dc6caa1da4ec104a117a6feb671d57bac438346127d9e56f: Status 404 returned error can't find the container with id a45f38838029fc27dc6caa1da4ec104a117a6feb671d57bac438346127d9e56f
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.329501 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb\" (UID: \"ce7c40ca-05ad-49ca-a091-02ac588c3eb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb"
Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.330361 4699 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.330417 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert podName:ce7c40ca-05ad-49ca-a091-02ac588c3eb7 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:34.330398403 +0000 UTC m=+1060.141224837 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" (UID: "ce7c40ca-05ad-49ca-a091-02ac588c3eb7") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.333666 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm"]
Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.344535 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kgcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-67d996989d-9gwwj_openstack-operators(caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.345676 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" podUID="caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2"
Feb 26 11:28:32 crc kubenswrapper[4699]: W0226 11:28:32.347865 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38eef260_c32f_4568_9936_6197ba984f05.slice/crio-03434c2603ce983f1589e4c04fcd928073db2a119d9fab5c4102bad464c1649a WatchSource:0}: Error finding container 
03434c2603ce983f1589e4c04fcd928073db2a119d9fab5c4102bad464c1649a: Status 404 returned error can't find the container with id 03434c2603ce983f1589e4c04fcd928073db2a119d9fab5c4102bad464c1649a Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.356129 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5h4qj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-95whc_openstack-operators(38eef260-c32f-4568-9936-6197ba984f05): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.357508 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5nk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-bqvxr_openstack-operators(33fc0a61-18c9-4e80-b898-92a5b1b71dac): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.357632 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" podUID="38eef260-c32f-4568-9936-6197ba984f05" Feb 26 11:28:32 crc 
kubenswrapper[4699]: W0226 11:28:32.358550 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5be0c14a_e51f_4b69_ab58_c0cac66910e2.slice/crio-c2fdf1600e84d512a1ae50393921855a8028a64733ca2a0a334b34e26257353b WatchSource:0}: Error finding container c2fdf1600e84d512a1ae50393921855a8028a64733ca2a0a334b34e26257353b: Status 404 returned error can't find the container with id c2fdf1600e84d512a1ae50393921855a8028a64733ca2a0a334b34e26257353b Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.358630 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr" podUID="33fc0a61-18c9-4e80-b898-92a5b1b71dac" Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.364342 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g2wxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6bd4687957-6gblm_openstack-operators(54959b79-361c-415a-986d-1af6d8eb6701): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.364408 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6hdkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5dc6794d5b-mwvnr_openstack-operators(5be0c14a-e51f-4b69-ab58-c0cac66910e2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.366178 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr" podUID="5be0c14a-e51f-4b69-ab58-c0cac66910e2" Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.366264 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" podUID="54959b79-361c-415a-986d-1af6d8eb6701" Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.484936 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" event={"ID":"38eef260-c32f-4568-9936-6197ba984f05","Type":"ContainerStarted","Data":"03434c2603ce983f1589e4c04fcd928073db2a119d9fab5c4102bad464c1649a"} Feb 26 
11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.486252 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" podUID="38eef260-c32f-4568-9936-6197ba984f05" Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.486718 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f" event={"ID":"7b204025-d5ff-4c74-96b9-6774b62e0cc4","Type":"ContainerStarted","Data":"8c4fb8e66f8f1dfface81fc9e61cadc1e57ce5df81ab31b96de495e237aaa5a4"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.490096 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" event={"ID":"a2c419ab-2a99-4d37-b46c-b84024f24b2e","Type":"ContainerStarted","Data":"6e84e8fdaecccbb4363b0e0667cab937ffa9a4a9f454839b6678898eef797324"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.492367 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9" event={"ID":"1814471e-5f82-4464-9528-75da66d7235b","Type":"ContainerStarted","Data":"b5b101ddedfa4b19b97f65b260a1ddb656db205a395b8f30f4de98903c761fb8"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.494206 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" event={"ID":"a6e7ca85-e18b-4605-9180-316f65b82006","Type":"ContainerStarted","Data":"aa96c261f60037df1a40fde6b38bfd921f422a546b721874f9515049a4d51ea0"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.495830 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr" event={"ID":"5be0c14a-e51f-4b69-ab58-c0cac66910e2","Type":"ContainerStarted","Data":"c2fdf1600e84d512a1ae50393921855a8028a64733ca2a0a334b34e26257353b"} Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.497948 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr" podUID="5be0c14a-e51f-4b69-ab58-c0cac66910e2" Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.512260 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5" event={"ID":"15255a9b-0767-4518-8e81-ca9044f9190a","Type":"ContainerStarted","Data":"7f66db4503c9cf87d41f0fc7e2f3a655fefe3e62ed6273462062d4ddfdebe629"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.515524 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs" event={"ID":"0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee","Type":"ContainerStarted","Data":"0aa5533ef85057af20475853b15327d2f6b844961cfc0e9672c1d5cf095b950f"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.516846 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" event={"ID":"caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2","Type":"ContainerStarted","Data":"a6d65cc2c247074d7226b3de196e27eba5b0573bad2712ac68409f726d4f7e9c"} Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.522878 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" podUID="caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2" Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.549194 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-96png" event={"ID":"a90c4025-7bd1-401b-8f92-5f15a58fb3d6","Type":"ContainerStarted","Data":"d6aa0057c32f13b3e8adb701b0ee1ce87290118b32e08f2e34242c3151275b37"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.568381 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p" event={"ID":"d56efcbf-3414-4bd1-9cbf-d56c434ac529","Type":"ContainerStarted","Data":"6b0e33aeb1ddc4a7b24c34ab91d6bbd692c23ca70a55db1e39083b786c3cb891"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.573529 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr" event={"ID":"33fc0a61-18c9-4e80-b898-92a5b1b71dac","Type":"ContainerStarted","Data":"f70bbd10421047762b4e3679725eeb5ed1d110a8262b0e1a765ea1618b2299a1"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.576068 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4"] Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.579536 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr" podUID="33fc0a61-18c9-4e80-b898-92a5b1b71dac" Feb 26 11:28:32 crc kubenswrapper[4699]: 
I0226 11:28:32.579856 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm" event={"ID":"07c2552c-8182-4cfe-a397-39ad287029e5","Type":"ContainerStarted","Data":"2a5aaba4b6ff0ac20e706fa4cb48c18ea61316825ddc6a5ac1e78e37ff5a21eb"} Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.580579 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d2sjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-ghqf4_openstack-operators(8d440653-f1c3-483c-a37d-463dcfc15224): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.581326 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz" event={"ID":"27e251bb-8f9b-48d4-9ea3-81d03fd85244","Type":"ContainerStarted","Data":"61ae194977614152b39fbf549b14c2b6ba4e9a1ad2475153a43fb5b2aa76152b"} Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.581766 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4" podUID="8d440653-f1c3-483c-a37d-463dcfc15224" Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.585439 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" 
event={"ID":"619dff06-7255-4aab-9ffe-9f2561bcc904","Type":"ContainerStarted","Data":"db102e99dacd7e0f310a45a6a04234e31237ab1cbcd6c93742ded6628d3cb9d1"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.587137 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77" event={"ID":"7545763d-d2d2-4b6e-980d-737062f0a894","Type":"ContainerStarted","Data":"a45f38838029fc27dc6caa1da4ec104a117a6feb671d57bac438346127d9e56f"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.590493 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z" event={"ID":"35555f68-d5c4-44b2-9dfa-af5f91f57c7c","Type":"ContainerStarted","Data":"2e24a98d5e837ec1f9775546fc7401ede85f925da653cc504122bd2164829905"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.591453 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" event={"ID":"54959b79-361c-415a-986d-1af6d8eb6701","Type":"ContainerStarted","Data":"19f836c78094e1fa67ae7ef4cdc0cf8b6da9e5fc2e11e645a38476256764b5d3"} Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.591819 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7"] Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.594284 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" podUID="54959b79-361c-415a-986d-1af6d8eb6701" Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.639852 4699 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:32 crc kubenswrapper[4699]: I0226 11:28:32.639939 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.640065 4699 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.640158 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs podName:ebf1a568-be30-4ceb-bc67-e3158a0280b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:34.640138999 +0000 UTC m=+1060.450965433 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs") pod "openstack-operator-controller-manager-947f4f86b-m69sv" (UID: "ebf1a568-be30-4ceb-bc67-e3158a0280b9") : secret "webhook-server-cert" not found Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.641188 4699 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 11:28:32 crc kubenswrapper[4699]: E0226 11:28:32.641789 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs podName:ebf1a568-be30-4ceb-bc67-e3158a0280b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:34.641760695 +0000 UTC m=+1060.452587149 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs") pod "openstack-operator-controller-manager-947f4f86b-m69sv" (UID: "ebf1a568-be30-4ceb-bc67-e3158a0280b9") : secret "metrics-server-cert" not found Feb 26 11:28:33 crc kubenswrapper[4699]: I0226 11:28:33.621141 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4" event={"ID":"8d440653-f1c3-483c-a37d-463dcfc15224","Type":"ContainerStarted","Data":"faa6deedd8d002c98bac7dc2db2f44b197dd5d6fac224340edf72a5d88594500"} Feb 26 11:28:33 crc kubenswrapper[4699]: I0226 11:28:33.625004 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7" event={"ID":"a2b3bf3b-a815-4033-983b-eedc16b8609f","Type":"ContainerStarted","Data":"de1a251e348c24559c54fd374b8e5b8730720185642bbad9e6bd93882f1d1e59"} Feb 26 11:28:33 crc kubenswrapper[4699]: E0226 11:28:33.625646 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4" podUID="8d440653-f1c3-483c-a37d-463dcfc15224" Feb 26 11:28:33 crc kubenswrapper[4699]: E0226 11:28:33.636278 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr" podUID="5be0c14a-e51f-4b69-ab58-c0cac66910e2" Feb 26 11:28:33 crc kubenswrapper[4699]: E0226 11:28:33.636716 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" podUID="38eef260-c32f-4568-9936-6197ba984f05" Feb 26 11:28:33 crc kubenswrapper[4699]: E0226 11:28:33.638662 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr" podUID="33fc0a61-18c9-4e80-b898-92a5b1b71dac" Feb 26 11:28:33 crc kubenswrapper[4699]: E0226 11:28:33.639977 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" podUID="54959b79-361c-415a-986d-1af6d8eb6701" Feb 26 11:28:33 crc kubenswrapper[4699]: E0226 11:28:33.640764 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" podUID="caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2" Feb 26 11:28:33 crc kubenswrapper[4699]: I0226 11:28:33.763191 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert\") pod \"infra-operator-controller-manager-79d975b745-mtrs6\" (UID: \"afbeb2d8-c332-447b-a931-9fe7b246914d\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:33 crc kubenswrapper[4699]: E0226 11:28:33.763504 4699 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 11:28:33 crc kubenswrapper[4699]: E0226 11:28:33.763563 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert podName:afbeb2d8-c332-447b-a931-9fe7b246914d nodeName:}" failed. No retries permitted until 2026-02-26 11:28:37.763543498 +0000 UTC m=+1063.574369932 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert") pod "infra-operator-controller-manager-79d975b745-mtrs6" (UID: "afbeb2d8-c332-447b-a931-9fe7b246914d") : secret "infra-operator-webhook-server-cert" not found Feb 26 11:28:34 crc kubenswrapper[4699]: I0226 11:28:34.373791 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb\" (UID: \"ce7c40ca-05ad-49ca-a091-02ac588c3eb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" Feb 26 11:28:34 crc kubenswrapper[4699]: E0226 11:28:34.374853 4699 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:28:34 crc kubenswrapper[4699]: E0226 11:28:34.375191 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert podName:ce7c40ca-05ad-49ca-a091-02ac588c3eb7 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:38.375170288 +0000 UTC m=+1064.185996782 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" (UID: "ce7c40ca-05ad-49ca-a091-02ac588c3eb7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:28:34 crc kubenswrapper[4699]: E0226 11:28:34.635610 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4" podUID="8d440653-f1c3-483c-a37d-463dcfc15224" Feb 26 11:28:34 crc kubenswrapper[4699]: I0226 11:28:34.679068 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:34 crc kubenswrapper[4699]: I0226 11:28:34.679157 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:34 crc kubenswrapper[4699]: E0226 11:28:34.679357 4699 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 11:28:34 crc kubenswrapper[4699]: E0226 11:28:34.679447 4699 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs podName:ebf1a568-be30-4ceb-bc67-e3158a0280b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:38.67943186 +0000 UTC m=+1064.490258294 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs") pod "openstack-operator-controller-manager-947f4f86b-m69sv" (UID: "ebf1a568-be30-4ceb-bc67-e3158a0280b9") : secret "metrics-server-cert" not found Feb 26 11:28:34 crc kubenswrapper[4699]: E0226 11:28:34.679462 4699 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 11:28:34 crc kubenswrapper[4699]: E0226 11:28:34.679540 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs podName:ebf1a568-be30-4ceb-bc67-e3158a0280b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:38.679519162 +0000 UTC m=+1064.490345626 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs") pod "openstack-operator-controller-manager-947f4f86b-m69sv" (UID: "ebf1a568-be30-4ceb-bc67-e3158a0280b9") : secret "webhook-server-cert" not found Feb 26 11:28:37 crc kubenswrapper[4699]: I0226 11:28:37.842485 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert\") pod \"infra-operator-controller-manager-79d975b745-mtrs6\" (UID: \"afbeb2d8-c332-447b-a931-9fe7b246914d\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:37 crc kubenswrapper[4699]: E0226 11:28:37.843023 4699 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 11:28:37 crc kubenswrapper[4699]: E0226 11:28:37.843084 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert podName:afbeb2d8-c332-447b-a931-9fe7b246914d nodeName:}" failed. No retries permitted until 2026-02-26 11:28:45.843063225 +0000 UTC m=+1071.653889659 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert") pod "infra-operator-controller-manager-79d975b745-mtrs6" (UID: "afbeb2d8-c332-447b-a931-9fe7b246914d") : secret "infra-operator-webhook-server-cert" not found Feb 26 11:28:38 crc kubenswrapper[4699]: I0226 11:28:38.451667 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb\" (UID: \"ce7c40ca-05ad-49ca-a091-02ac588c3eb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" Feb 26 11:28:38 crc kubenswrapper[4699]: E0226 11:28:38.451877 4699 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:28:38 crc kubenswrapper[4699]: E0226 11:28:38.451958 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert podName:ce7c40ca-05ad-49ca-a091-02ac588c3eb7 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:46.451935908 +0000 UTC m=+1072.262762342 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" (UID: "ce7c40ca-05ad-49ca-a091-02ac588c3eb7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:28:38 crc kubenswrapper[4699]: I0226 11:28:38.756971 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:38 crc kubenswrapper[4699]: I0226 11:28:38.757043 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:38 crc kubenswrapper[4699]: E0226 11:28:38.757210 4699 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 11:28:38 crc kubenswrapper[4699]: E0226 11:28:38.757245 4699 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 11:28:38 crc kubenswrapper[4699]: E0226 11:28:38.757258 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs podName:ebf1a568-be30-4ceb-bc67-e3158a0280b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:46.757244368 +0000 UTC m=+1072.568070792 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs") pod "openstack-operator-controller-manager-947f4f86b-m69sv" (UID: "ebf1a568-be30-4ceb-bc67-e3158a0280b9") : secret "metrics-server-cert" not found Feb 26 11:28:38 crc kubenswrapper[4699]: E0226 11:28:38.757342 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs podName:ebf1a568-be30-4ceb-bc67-e3158a0280b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:28:46.75732391 +0000 UTC m=+1072.568150344 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs") pod "openstack-operator-controller-manager-947f4f86b-m69sv" (UID: "ebf1a568-be30-4ceb-bc67-e3158a0280b9") : secret "webhook-server-cert" not found Feb 26 11:28:45 crc kubenswrapper[4699]: I0226 11:28:45.890211 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert\") pod \"infra-operator-controller-manager-79d975b745-mtrs6\" (UID: \"afbeb2d8-c332-447b-a931-9fe7b246914d\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:45 crc kubenswrapper[4699]: I0226 11:28:45.901895 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afbeb2d8-c332-447b-a931-9fe7b246914d-cert\") pod \"infra-operator-controller-manager-79d975b745-mtrs6\" (UID: \"afbeb2d8-c332-447b-a931-9fe7b246914d\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:45 crc kubenswrapper[4699]: I0226 11:28:45.917761 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:28:46 crc kubenswrapper[4699]: I0226 11:28:46.499798 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb\" (UID: \"ce7c40ca-05ad-49ca-a091-02ac588c3eb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" Feb 26 11:28:46 crc kubenswrapper[4699]: I0226 11:28:46.509382 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7c40ca-05ad-49ca-a091-02ac588c3eb7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb\" (UID: \"ce7c40ca-05ad-49ca-a091-02ac588c3eb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" Feb 26 11:28:46 crc kubenswrapper[4699]: I0226 11:28:46.628921 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" Feb 26 11:28:46 crc kubenswrapper[4699]: I0226 11:28:46.803388 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:46 crc kubenswrapper[4699]: I0226 11:28:46.803536 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:46 crc kubenswrapper[4699]: I0226 11:28:46.811862 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-metrics-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:46 crc kubenswrapper[4699]: I0226 11:28:46.811972 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ebf1a568-be30-4ceb-bc67-e3158a0280b9-webhook-certs\") pod \"openstack-operator-controller-manager-947f4f86b-m69sv\" (UID: \"ebf1a568-be30-4ceb-bc67-e3158a0280b9\") " pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:46 crc kubenswrapper[4699]: I0226 11:28:46.986187 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:47 crc kubenswrapper[4699]: E0226 11:28:47.838426 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 26 11:28:47 crc kubenswrapper[4699]: E0226 11:28:47.838662 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrm4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-qf9vd_openstack-operators(619dff06-7255-4aab-9ffe-9f2561bcc904): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:47 crc kubenswrapper[4699]: E0226 11:28:47.841143 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" podUID="619dff06-7255-4aab-9ffe-9f2561bcc904" Feb 26 11:28:48 crc kubenswrapper[4699]: E0226 11:28:48.516586 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06" Feb 26 11:28:48 crc kubenswrapper[4699]: E0226 11:28:48.517086 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97rqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-659dc6bbfc-2wj2n_openstack-operators(a6e7ca85-e18b-4605-9180-316f65b82006): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:48 crc kubenswrapper[4699]: E0226 11:28:48.518285 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" podUID="a6e7ca85-e18b-4605-9180-316f65b82006" Feb 26 11:28:48 crc kubenswrapper[4699]: E0226 11:28:48.721803 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" podUID="619dff06-7255-4aab-9ffe-9f2561bcc904" Feb 26 11:28:48 crc kubenswrapper[4699]: E0226 11:28:48.722054 4699 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" podUID="a6e7ca85-e18b-4605-9180-316f65b82006" Feb 26 11:28:49 crc kubenswrapper[4699]: E0226 11:28:49.094179 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" Feb 26 11:28:49 crc kubenswrapper[4699]: E0226 11:28:49.095292 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkj4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-fnnc7_openstack-operators(a2b3bf3b-a815-4033-983b-eedc16b8609f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:49 crc kubenswrapper[4699]: E0226 11:28:49.097161 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7" podUID="a2b3bf3b-a815-4033-983b-eedc16b8609f" Feb 26 11:28:49 crc kubenswrapper[4699]: E0226 11:28:49.728220 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7" podUID="a2b3bf3b-a815-4033-983b-eedc16b8609f" Feb 26 11:28:49 crc kubenswrapper[4699]: E0226 11:28:49.965004 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 26 11:28:49 crc kubenswrapper[4699]: E0226 11:28:49.965562 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vmk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-4mghs_openstack-operators(0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:49 crc kubenswrapper[4699]: E0226 11:28:49.967101 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs" podUID="0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee" Feb 26 11:28:50 crc kubenswrapper[4699]: E0226 11:28:50.483931 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 26 11:28:50 crc kubenswrapper[4699]: E0226 11:28:50.484599 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-28f6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-jxr77_openstack-operators(7545763d-d2d2-4b6e-980d-737062f0a894): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:50 crc kubenswrapper[4699]: E0226 11:28:50.488301 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77" podUID="7545763d-d2d2-4b6e-980d-737062f0a894" Feb 26 11:28:50 crc kubenswrapper[4699]: E0226 11:28:50.735561 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs" podUID="0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee" Feb 26 11:28:50 crc kubenswrapper[4699]: E0226 11:28:50.735576 4699 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77" podUID="7545763d-d2d2-4b6e-980d-737062f0a894" Feb 26 11:28:50 crc kubenswrapper[4699]: I0226 11:28:50.930160 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv"] Feb 26 11:28:51 crc kubenswrapper[4699]: E0226 11:28:51.266815 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 26 11:28:51 crc kubenswrapper[4699]: E0226 11:28:51.267016 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mb6zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-d2pxc_openstack-operators(a2c419ab-2a99-4d37-b46c-b84024f24b2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:51 crc kubenswrapper[4699]: E0226 11:28:51.270215 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" podUID="a2c419ab-2a99-4d37-b46c-b84024f24b2e" Feb 26 11:28:51 crc kubenswrapper[4699]: W0226 11:28:51.271729 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebf1a568_be30_4ceb_bc67_e3158a0280b9.slice/crio-ea200e71db244b9336eb7a08d6d473be11b178679db63a369d1ad3c770199d02 WatchSource:0}: Error finding container ea200e71db244b9336eb7a08d6d473be11b178679db63a369d1ad3c770199d02: Status 404 returned error can't find the container with id ea200e71db244b9336eb7a08d6d473be11b178679db63a369d1ad3c770199d02 Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.704565 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb"] Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.753296 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" event={"ID":"ebf1a568-be30-4ceb-bc67-e3158a0280b9","Type":"ContainerStarted","Data":"ea200e71db244b9336eb7a08d6d473be11b178679db63a369d1ad3c770199d02"} Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.764375 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm" event={"ID":"07c2552c-8182-4cfe-a397-39ad287029e5","Type":"ContainerStarted","Data":"6c11046fbd5bea3301881e9a1c591e718bc6ddf795eb2e2f617c6ab877a9b08f"} Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.766610 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm" Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.775731 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" 
event={"ID":"ce7c40ca-05ad-49ca-a091-02ac588c3eb7","Type":"ContainerStarted","Data":"5bec5464a696581ba804d2a53c257cae642531445e768a7a0f2319d83e69a268"} Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.781523 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9" event={"ID":"1814471e-5f82-4464-9528-75da66d7235b","Type":"ContainerStarted","Data":"fa74f8b865e2119b7b535fbc125f5f43932d2aeac97440dfebd4b5039419ec0f"} Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.781603 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9" Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.787605 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz" Feb 26 11:28:51 crc kubenswrapper[4699]: E0226 11:28:51.791398 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" podUID="a2c419ab-2a99-4d37-b46c-b84024f24b2e" Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.813797 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6"] Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.817417 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm" podStartSLOduration=4.389839014 podStartE2EDuration="22.817393715s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.04729574 +0000 UTC 
m=+1057.858122174" lastFinishedPulling="2026-02-26 11:28:50.474850441 +0000 UTC m=+1076.285676875" observedRunningTime="2026-02-26 11:28:51.805873659 +0000 UTC m=+1077.616700083" watchObservedRunningTime="2026-02-26 11:28:51.817393715 +0000 UTC m=+1077.628220149" Feb 26 11:28:51 crc kubenswrapper[4699]: W0226 11:28:51.824307 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafbeb2d8_c332_447b_a931_9fe7b246914d.slice/crio-57d928a069c0f4a62bd37bdb55b7db4621d8b61ab8483ee5844ea927364f139f WatchSource:0}: Error finding container 57d928a069c0f4a62bd37bdb55b7db4621d8b61ab8483ee5844ea927364f139f: Status 404 returned error can't find the container with id 57d928a069c0f4a62bd37bdb55b7db4621d8b61ab8483ee5844ea927364f139f Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.889653 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9" podStartSLOduration=3.325654838 podStartE2EDuration="22.889635527s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:31.680191251 +0000 UTC m=+1057.491017685" lastFinishedPulling="2026-02-26 11:28:51.24417194 +0000 UTC m=+1077.054998374" observedRunningTime="2026-02-26 11:28:51.881533348 +0000 UTC m=+1077.692359782" watchObservedRunningTime="2026-02-26 11:28:51.889635527 +0000 UTC m=+1077.700461961" Feb 26 11:28:51 crc kubenswrapper[4699]: I0226 11:28:51.916366 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz" podStartSLOduration=4.272735453 podStartE2EDuration="22.916348852s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:31.830276545 +0000 UTC m=+1057.641102979" lastFinishedPulling="2026-02-26 11:28:50.473889944 +0000 UTC m=+1076.284716378" observedRunningTime="2026-02-26 
11:28:51.913753179 +0000 UTC m=+1077.724579633" watchObservedRunningTime="2026-02-26 11:28:51.916348852 +0000 UTC m=+1077.727175286" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.797984 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-96png" event={"ID":"a90c4025-7bd1-401b-8f92-5f15a58fb3d6","Type":"ContainerStarted","Data":"66e80b6ecdd81f6a591e87b879141a96e4ef50753f60557357b793024413feb1"} Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.798403 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-96png" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.799592 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p" event={"ID":"d56efcbf-3414-4bd1-9cbf-d56c434ac529","Type":"ContainerStarted","Data":"272f949db65f166b94d5c631bee16c7e1f418a1af6aaa2732c89b04a26218d51"} Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.799788 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.802013 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" event={"ID":"ebf1a568-be30-4ceb-bc67-e3158a0280b9","Type":"ContainerStarted","Data":"94eac119d488dc7fd77ada5781bc981fa79d0eccb15b772532326268990baf17"} Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.802181 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.804336 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z" event={"ID":"35555f68-d5c4-44b2-9dfa-af5f91f57c7c","Type":"ContainerStarted","Data":"29b9415426dd1cf00735e4ad95da2d37939b5adddbfff72c69ca7d2285781200"} Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.804569 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.806721 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz" event={"ID":"27e251bb-8f9b-48d4-9ea3-81d03fd85244","Type":"ContainerStarted","Data":"fa8d386511243b5c1f8ec81d6765dfe78e6bf9dceecc149bdaa7b5032edf7d43"} Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.808265 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5" event={"ID":"15255a9b-0767-4518-8e81-ca9044f9190a","Type":"ContainerStarted","Data":"bbc67fb061f1462c637b2cb3a2f13c7b36d78c90e5193e1aa51941ff4adb6697"} Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.808386 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.811691 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" event={"ID":"afbeb2d8-c332-447b-a931-9fe7b246914d","Type":"ContainerStarted","Data":"57d928a069c0f4a62bd37bdb55b7db4621d8b61ab8483ee5844ea927364f139f"} Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.823940 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-96png" podStartSLOduration=4.478343606 podStartE2EDuration="22.823900689s" 
podCreationTimestamp="2026-02-26 11:28:30 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.128594818 +0000 UTC m=+1057.939421252" lastFinishedPulling="2026-02-26 11:28:50.474151901 +0000 UTC m=+1076.284978335" observedRunningTime="2026-02-26 11:28:52.813396922 +0000 UTC m=+1078.624223366" watchObservedRunningTime="2026-02-26 11:28:52.823900689 +0000 UTC m=+1078.634727123" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.833548 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f" event={"ID":"7b204025-d5ff-4c74-96b9-6774b62e0cc4","Type":"ContainerStarted","Data":"7636ab711872fcf872fecf3291e8b2b61bdc994423a76d8dcc516609dbe02d72"} Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.834457 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.868294 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" podStartSLOduration=22.868271413 podStartE2EDuration="22.868271413s" podCreationTimestamp="2026-02-26 11:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:28:52.862639954 +0000 UTC m=+1078.673466388" watchObservedRunningTime="2026-02-26 11:28:52.868271413 +0000 UTC m=+1078.679097847" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.895306 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5" podStartSLOduration=3.991846613 podStartE2EDuration="22.895289447s" podCreationTimestamp="2026-02-26 11:28:30 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.34053249 +0000 UTC m=+1058.151358924" lastFinishedPulling="2026-02-26 
11:28:51.243975324 +0000 UTC m=+1077.054801758" observedRunningTime="2026-02-26 11:28:52.891784428 +0000 UTC m=+1078.702610872" watchObservedRunningTime="2026-02-26 11:28:52.895289447 +0000 UTC m=+1078.706115871" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.908568 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z" podStartSLOduration=5.420544312 podStartE2EDuration="23.908550252s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:31.986023498 +0000 UTC m=+1057.796849942" lastFinishedPulling="2026-02-26 11:28:50.474029448 +0000 UTC m=+1076.284855882" observedRunningTime="2026-02-26 11:28:52.905588048 +0000 UTC m=+1078.716414482" watchObservedRunningTime="2026-02-26 11:28:52.908550252 +0000 UTC m=+1078.719376686" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.940202 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p" podStartSLOduration=5.249730252 podStartE2EDuration="23.940188366s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:31.783602925 +0000 UTC m=+1057.594429359" lastFinishedPulling="2026-02-26 11:28:50.474061039 +0000 UTC m=+1076.284887473" observedRunningTime="2026-02-26 11:28:52.939313371 +0000 UTC m=+1078.750139815" watchObservedRunningTime="2026-02-26 11:28:52.940188366 +0000 UTC m=+1078.751014800" Feb 26 11:28:52 crc kubenswrapper[4699]: I0226 11:28:52.971851 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f" podStartSLOduration=5.149085247 podStartE2EDuration="23.971831681s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:31.651255703 +0000 UTC m=+1057.462082137" lastFinishedPulling="2026-02-26 11:28:50.474002137 
+0000 UTC m=+1076.284828571" observedRunningTime="2026-02-26 11:28:52.965616445 +0000 UTC m=+1078.776442889" watchObservedRunningTime="2026-02-26 11:28:52.971831681 +0000 UTC m=+1078.782658125" Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.652238 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lqqfb"] Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.654731 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.685634 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqqfb"] Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.755990 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-utilities\") pod \"redhat-marketplace-lqqfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.756060 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dzn\" (UniqueName: \"kubernetes.io/projected/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-kube-api-access-p7dzn\") pod \"redhat-marketplace-lqqfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.756187 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-catalog-content\") pod \"redhat-marketplace-lqqfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:28:56 crc 
kubenswrapper[4699]: I0226 11:28:56.857229 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-utilities\") pod \"redhat-marketplace-lqqfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.857297 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7dzn\" (UniqueName: \"kubernetes.io/projected/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-kube-api-access-p7dzn\") pod \"redhat-marketplace-lqqfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.857330 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-catalog-content\") pod \"redhat-marketplace-lqqfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.857856 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-catalog-content\") pod \"redhat-marketplace-lqqfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.859868 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-utilities\") pod \"redhat-marketplace-lqqfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.879912 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7dzn\" (UniqueName: \"kubernetes.io/projected/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-kube-api-access-p7dzn\") pod \"redhat-marketplace-lqqfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.976566 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:28:56 crc kubenswrapper[4699]: I0226 11:28:56.992483 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-947f4f86b-m69sv" Feb 26 11:29:00 crc kubenswrapper[4699]: I0226 11:29:00.116107 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-sndb9" Feb 26 11:29:00 crc kubenswrapper[4699]: I0226 11:29:00.130708 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-xw85z" Feb 26 11:29:00 crc kubenswrapper[4699]: I0226 11:29:00.147180 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-4k4sm" Feb 26 11:29:00 crc kubenswrapper[4699]: I0226 11:29:00.168377 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-jh7vz" Feb 26 11:29:00 crc kubenswrapper[4699]: I0226 11:29:00.210703 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t8c9f" Feb 26 11:29:00 crc kubenswrapper[4699]: I0226 11:29:00.407033 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5k85p" Feb 26 11:29:00 crc kubenswrapper[4699]: I0226 11:29:00.873019 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-96png" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.276471 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-f9kz5" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.842222 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqqfb"] Feb 26 11:29:01 crc kubenswrapper[4699]: W0226 11:29:01.850983 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcab2afa_9fb1_4d74_9a95_c2fe6a00bbfb.slice/crio-b20b95eb36039045cc49af319621d539e3867b2ec5748b29d5419b0ee942114d WatchSource:0}: Error finding container b20b95eb36039045cc49af319621d539e3867b2ec5748b29d5419b0ee942114d: Status 404 returned error can't find the container with id b20b95eb36039045cc49af319621d539e3867b2ec5748b29d5419b0ee942114d Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.911103 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr" event={"ID":"5be0c14a-e51f-4b69-ab58-c0cac66910e2","Type":"ContainerStarted","Data":"4e182608610914b91c743dff33bf39a9d5d2c35ec6581b104adebb05d28d93c6"} Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.911348 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.913252 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" 
event={"ID":"afbeb2d8-c332-447b-a931-9fe7b246914d","Type":"ContainerStarted","Data":"d2b3b5996c5e1d357eb827e231563c1dfbc1e8a7644d2936251e261afdb389a1"} Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.913340 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.915719 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" event={"ID":"38eef260-c32f-4568-9936-6197ba984f05","Type":"ContainerStarted","Data":"2a103823842c06754087deab2ba925169b8a9452b423ea1bc09fc08779c4d9b9"} Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.915976 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.917232 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqqfb" event={"ID":"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb","Type":"ContainerStarted","Data":"b20b95eb36039045cc49af319621d539e3867b2ec5748b29d5419b0ee942114d"} Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.918679 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" event={"ID":"ce7c40ca-05ad-49ca-a091-02ac588c3eb7","Type":"ContainerStarted","Data":"a03d27af0446736421a168d8f1e337e63624375f40c20ac3cfd7ea03bfcaf4f2"} Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.918819 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.921565 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4" event={"ID":"8d440653-f1c3-483c-a37d-463dcfc15224","Type":"ContainerStarted","Data":"690dd1cfb71b1b2f2bea9fd378e31a5b15028476b9244ea65aeacb8ab832456a"} Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.923176 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" event={"ID":"54959b79-361c-415a-986d-1af6d8eb6701","Type":"ContainerStarted","Data":"f143a83a1fd6b9178dc8c6a6191c2dda227a8eecec4c3fc19e40f514d4533fb5"} Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.923416 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.925593 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" event={"ID":"caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2","Type":"ContainerStarted","Data":"8755619bb8a703a6c7e6ffa9eb407f4828e683d0e99c7314f1cfb9d655a92858"} Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.925765 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.927148 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" event={"ID":"619dff06-7255-4aab-9ffe-9f2561bcc904","Type":"ContainerStarted","Data":"a2c6867160870890433508410210e626be78f0792238e0ebf285803cf300b8a2"} Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.927343 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.927844 4699 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr" podStartSLOduration=2.850600538 podStartE2EDuration="31.927831633s" podCreationTimestamp="2026-02-26 11:28:30 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.364227089 +0000 UTC m=+1058.175053523" lastFinishedPulling="2026-02-26 11:29:01.441458184 +0000 UTC m=+1087.252284618" observedRunningTime="2026-02-26 11:29:01.925470567 +0000 UTC m=+1087.736297001" watchObservedRunningTime="2026-02-26 11:29:01.927831633 +0000 UTC m=+1087.738658057" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.932708 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr" event={"ID":"33fc0a61-18c9-4e80-b898-92a5b1b71dac","Type":"ContainerStarted","Data":"05f89827978a53ff9302503378bea2956dd208c90c777008090f6b234581af52"} Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.933909 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.952478 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" podStartSLOduration=3.847185543 podStartE2EDuration="32.95245965s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.344397279 +0000 UTC m=+1058.155223713" lastFinishedPulling="2026-02-26 11:29:01.449671386 +0000 UTC m=+1087.260497820" observedRunningTime="2026-02-26 11:29:01.94646773 +0000 UTC m=+1087.757294164" watchObservedRunningTime="2026-02-26 11:29:01.95245965 +0000 UTC m=+1087.763286084" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.990048 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" podStartSLOduration=3.913113376 podStartE2EDuration="32.990029402s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.355988246 +0000 UTC m=+1058.166814680" lastFinishedPulling="2026-02-26 11:29:01.432904272 +0000 UTC m=+1087.243730706" observedRunningTime="2026-02-26 11:29:01.973887365 +0000 UTC m=+1087.784713799" watchObservedRunningTime="2026-02-26 11:29:01.990029402 +0000 UTC m=+1087.800855826" Feb 26 11:29:01 crc kubenswrapper[4699]: I0226 11:29:01.997035 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" podStartSLOduration=3.913423105 podStartE2EDuration="32.997013659s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.364064995 +0000 UTC m=+1058.174891429" lastFinishedPulling="2026-02-26 11:29:01.447655549 +0000 UTC m=+1087.258481983" observedRunningTime="2026-02-26 11:29:01.995685342 +0000 UTC m=+1087.806511786" watchObservedRunningTime="2026-02-26 11:29:01.997013659 +0000 UTC m=+1087.807840103" Feb 26 11:29:02 crc kubenswrapper[4699]: I0226 11:29:02.041134 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" podStartSLOduration=22.338810256 podStartE2EDuration="32.041105636s" podCreationTimestamp="2026-02-26 11:28:30 +0000 UTC" firstStartedPulling="2026-02-26 11:28:51.730058906 +0000 UTC m=+1077.540885340" lastFinishedPulling="2026-02-26 11:29:01.432354286 +0000 UTC m=+1087.243180720" observedRunningTime="2026-02-26 11:29:02.037638678 +0000 UTC m=+1087.848465112" watchObservedRunningTime="2026-02-26 11:29:02.041105636 +0000 UTC m=+1087.851932070" Feb 26 11:29:02 crc kubenswrapper[4699]: I0226 11:29:02.080482 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" podStartSLOduration=23.475557131 podStartE2EDuration="33.080465768s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:51.845445418 +0000 UTC m=+1077.656271852" lastFinishedPulling="2026-02-26 11:29:01.450354045 +0000 UTC m=+1087.261180489" observedRunningTime="2026-02-26 11:29:02.070480506 +0000 UTC m=+1087.881306940" watchObservedRunningTime="2026-02-26 11:29:02.080465768 +0000 UTC m=+1087.891292192" Feb 26 11:29:02 crc kubenswrapper[4699]: I0226 11:29:02.107280 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ghqf4" podStartSLOduration=3.255878647 podStartE2EDuration="32.107258006s" podCreationTimestamp="2026-02-26 11:28:30 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.580483963 +0000 UTC m=+1058.391310397" lastFinishedPulling="2026-02-26 11:29:01.431863332 +0000 UTC m=+1087.242689756" observedRunningTime="2026-02-26 11:29:02.102260964 +0000 UTC m=+1087.913087398" watchObservedRunningTime="2026-02-26 11:29:02.107258006 +0000 UTC m=+1087.918084440" Feb 26 11:29:02 crc kubenswrapper[4699]: I0226 11:29:02.128561 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" podStartSLOduration=3.961107473 podStartE2EDuration="33.128538597s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.282335564 +0000 UTC m=+1058.093161998" lastFinishedPulling="2026-02-26 11:29:01.449766688 +0000 UTC m=+1087.260593122" observedRunningTime="2026-02-26 11:29:02.121782786 +0000 UTC m=+1087.932609230" watchObservedRunningTime="2026-02-26 11:29:02.128538597 +0000 UTC m=+1087.939365041" Feb 26 11:29:02 crc kubenswrapper[4699]: I0226 11:29:02.286670 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr" podStartSLOduration=3.212991624 podStartE2EDuration="32.286654117s" podCreationTimestamp="2026-02-26 11:28:30 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.357414387 +0000 UTC m=+1058.168240821" lastFinishedPulling="2026-02-26 11:29:01.43107688 +0000 UTC m=+1087.241903314" observedRunningTime="2026-02-26 11:29:02.150944001 +0000 UTC m=+1087.961770465" watchObservedRunningTime="2026-02-26 11:29:02.286654117 +0000 UTC m=+1088.097480551" Feb 26 11:29:02 crc kubenswrapper[4699]: I0226 11:29:02.951001 4699 generic.go:334] "Generic (PLEG): container finished" podID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" containerID="d12c24885bdbc88a04654df5aa4a62aabfb99830c195020e90a791e630ba32a8" exitCode=0 Feb 26 11:29:02 crc kubenswrapper[4699]: I0226 11:29:02.951080 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqqfb" event={"ID":"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb","Type":"ContainerDied","Data":"d12c24885bdbc88a04654df5aa4a62aabfb99830c195020e90a791e630ba32a8"} Feb 26 11:29:03 crc kubenswrapper[4699]: I0226 11:29:03.970092 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs" event={"ID":"0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee","Type":"ContainerStarted","Data":"73ff91eebc7079c7bd4e4770f147253ab675f37cd319dd6fa63a66d37ecee78e"} Feb 26 11:29:03 crc kubenswrapper[4699]: I0226 11:29:03.971317 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" event={"ID":"a6e7ca85-e18b-4605-9180-316f65b82006","Type":"ContainerStarted","Data":"69e79410d562cd7879f89803e097fb5466c498aec15a2c4d05823e8ae9dea80d"} Feb 26 11:29:06 crc kubenswrapper[4699]: I0226 11:29:06.635886 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb" Feb 26 11:29:10 crc kubenswrapper[4699]: I0226 11:29:10.017650 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" Feb 26 11:29:10 crc kubenswrapper[4699]: I0226 11:29:10.022242 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" Feb 26 11:29:10 crc kubenswrapper[4699]: I0226 11:29:10.037994 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-2wj2n" podStartSLOduration=9.555790113 podStartE2EDuration="40.037974355s" podCreationTimestamp="2026-02-26 11:28:30 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.255727732 +0000 UTC m=+1058.066554166" lastFinishedPulling="2026-02-26 11:29:02.737911974 +0000 UTC m=+1088.548738408" observedRunningTime="2026-02-26 11:29:10.032518061 +0000 UTC m=+1095.843344505" watchObservedRunningTime="2026-02-26 11:29:10.037974355 +0000 UTC m=+1095.848800789" Feb 26 11:29:10 crc kubenswrapper[4699]: I0226 11:29:10.075671 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs" podStartSLOduration=10.427117225 podStartE2EDuration="41.07564533s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.088091993 +0000 UTC m=+1057.898918427" lastFinishedPulling="2026-02-26 11:29:02.736620098 +0000 UTC m=+1088.547446532" observedRunningTime="2026-02-26 11:29:10.070232617 +0000 UTC m=+1095.881059051" watchObservedRunningTime="2026-02-26 11:29:10.07564533 +0000 UTC m=+1095.886471774" Feb 26 11:29:10 crc kubenswrapper[4699]: I0226 11:29:10.304830 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qf9vd" Feb 26 11:29:10 crc kubenswrapper[4699]: I0226 11:29:10.599829 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-9gwwj" Feb 26 11:29:10 crc kubenswrapper[4699]: I0226 11:29:10.722547 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-95whc" Feb 26 11:29:10 crc kubenswrapper[4699]: I0226 11:29:10.749050 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs" Feb 26 11:29:10 crc kubenswrapper[4699]: I0226 11:29:10.752128 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-4mghs" Feb 26 11:29:10 crc kubenswrapper[4699]: I0226 11:29:10.792702 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-6gblm" Feb 26 11:29:11 crc kubenswrapper[4699]: I0226 11:29:11.226085 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bqvxr" Feb 26 11:29:11 crc kubenswrapper[4699]: I0226 11:29:11.248596 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-mwvnr" Feb 26 11:29:15 crc kubenswrapper[4699]: I0226 11:29:15.923598 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mtrs6" Feb 26 11:29:18 crc kubenswrapper[4699]: I0226 11:29:18.076680 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77" 
event={"ID":"7545763d-d2d2-4b6e-980d-737062f0a894","Type":"ContainerStarted","Data":"21cf3c13d4147b90bbbc3e5c8c9143dd20a29710cc9f7a42b949171e4c3e971c"} Feb 26 11:29:18 crc kubenswrapper[4699]: I0226 11:29:18.077159 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77" Feb 26 11:29:18 crc kubenswrapper[4699]: I0226 11:29:18.079126 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" event={"ID":"a2c419ab-2a99-4d37-b46c-b84024f24b2e","Type":"ContainerStarted","Data":"df6d762b81d467e3a35d1a84b9e8af797f9c64a53bf891618914fe1e9c831664"} Feb 26 11:29:18 crc kubenswrapper[4699]: I0226 11:29:18.079325 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" Feb 26 11:29:18 crc kubenswrapper[4699]: I0226 11:29:18.081214 4699 generic.go:334] "Generic (PLEG): container finished" podID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" containerID="4112987fd7819bd9acbd52ee0573be4741193978a977b5c5b5cb9d664c60cbaa" exitCode=0 Feb 26 11:29:18 crc kubenswrapper[4699]: I0226 11:29:18.081298 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqqfb" event={"ID":"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb","Type":"ContainerDied","Data":"4112987fd7819bd9acbd52ee0573be4741193978a977b5c5b5cb9d664c60cbaa"} Feb 26 11:29:18 crc kubenswrapper[4699]: I0226 11:29:18.083073 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7" event={"ID":"a2b3bf3b-a815-4033-983b-eedc16b8609f","Type":"ContainerStarted","Data":"b8acb2396ea2c43e9c570cc00a83fec56bd1443bc07ec2e8c0c379585fc63883"} Feb 26 11:29:18 crc kubenswrapper[4699]: I0226 11:29:18.083276 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7" Feb 26 11:29:18 crc kubenswrapper[4699]: I0226 11:29:18.097733 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77" podStartSLOduration=3.186917505 podStartE2EDuration="48.097718241s" podCreationTimestamp="2026-02-26 11:28:30 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.340624432 +0000 UTC m=+1058.151450866" lastFinishedPulling="2026-02-26 11:29:17.251425128 +0000 UTC m=+1103.062251602" observedRunningTime="2026-02-26 11:29:18.094430238 +0000 UTC m=+1103.905256692" watchObservedRunningTime="2026-02-26 11:29:18.097718241 +0000 UTC m=+1103.908544665" Feb 26 11:29:18 crc kubenswrapper[4699]: I0226 11:29:18.111282 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7" podStartSLOduration=3.430559793 podStartE2EDuration="48.111261624s" podCreationTimestamp="2026-02-26 11:28:30 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.572390244 +0000 UTC m=+1058.383216678" lastFinishedPulling="2026-02-26 11:29:17.253092075 +0000 UTC m=+1103.063918509" observedRunningTime="2026-02-26 11:29:18.110675327 +0000 UTC m=+1103.921501781" watchObservedRunningTime="2026-02-26 11:29:18.111261624 +0000 UTC m=+1103.922088058" Feb 26 11:29:18 crc kubenswrapper[4699]: I0226 11:29:18.127786 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" podStartSLOduration=4.205067859 podStartE2EDuration="49.127766721s" podCreationTimestamp="2026-02-26 11:28:29 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.328445388 +0000 UTC m=+1058.139271822" lastFinishedPulling="2026-02-26 11:29:17.25114425 +0000 UTC m=+1103.061970684" observedRunningTime="2026-02-26 11:29:18.126930237 +0000 UTC m=+1103.937756691" 
watchObservedRunningTime="2026-02-26 11:29:18.127766721 +0000 UTC m=+1103.938593165" Feb 26 11:29:19 crc kubenswrapper[4699]: I0226 11:29:19.092393 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqqfb" event={"ID":"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb","Type":"ContainerStarted","Data":"1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d"} Feb 26 11:29:19 crc kubenswrapper[4699]: I0226 11:29:19.109885 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lqqfb" podStartSLOduration=7.486145707 podStartE2EDuration="23.109867745s" podCreationTimestamp="2026-02-26 11:28:56 +0000 UTC" firstStartedPulling="2026-02-26 11:29:02.95529579 +0000 UTC m=+1088.766122224" lastFinishedPulling="2026-02-26 11:29:18.579017828 +0000 UTC m=+1104.389844262" observedRunningTime="2026-02-26 11:29:19.107614271 +0000 UTC m=+1104.918440705" watchObservedRunningTime="2026-02-26 11:29:19.109867745 +0000 UTC m=+1104.920694189" Feb 26 11:29:26 crc kubenswrapper[4699]: I0226 11:29:26.977349 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:29:26 crc kubenswrapper[4699]: I0226 11:29:26.978974 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:29:27 crc kubenswrapper[4699]: I0226 11:29:27.020571 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:29:27 crc kubenswrapper[4699]: I0226 11:29:27.223797 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:29:27 crc kubenswrapper[4699]: I0226 11:29:27.275689 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqqfb"] Feb 26 11:29:29 crc 
kubenswrapper[4699]: I0226 11:29:29.195849 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lqqfb" podUID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" containerName="registry-server" containerID="cri-o://1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d" gracePeriod=2 Feb 26 11:29:29 crc kubenswrapper[4699]: I0226 11:29:29.665870 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:29:29 crc kubenswrapper[4699]: I0226 11:29:29.767683 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-utilities\") pod \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " Feb 26 11:29:29 crc kubenswrapper[4699]: I0226 11:29:29.767752 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-catalog-content\") pod \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " Feb 26 11:29:29 crc kubenswrapper[4699]: I0226 11:29:29.767845 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7dzn\" (UniqueName: \"kubernetes.io/projected/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-kube-api-access-p7dzn\") pod \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\" (UID: \"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb\") " Feb 26 11:29:29 crc kubenswrapper[4699]: I0226 11:29:29.768800 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-utilities" (OuterVolumeSpecName: "utilities") pod "dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" (UID: "dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:29:29 crc kubenswrapper[4699]: I0226 11:29:29.777108 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-kube-api-access-p7dzn" (OuterVolumeSpecName: "kube-api-access-p7dzn") pod "dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" (UID: "dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb"). InnerVolumeSpecName "kube-api-access-p7dzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:29 crc kubenswrapper[4699]: I0226 11:29:29.790799 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" (UID: "dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:29:29 crc kubenswrapper[4699]: I0226 11:29:29.868793 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:29 crc kubenswrapper[4699]: I0226 11:29:29.868830 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:29 crc kubenswrapper[4699]: I0226 11:29:29.868842 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7dzn\" (UniqueName: \"kubernetes.io/projected/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb-kube-api-access-p7dzn\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.207975 4699 generic.go:334] "Generic (PLEG): container finished" podID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" 
containerID="1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d" exitCode=0 Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.208029 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqqfb" event={"ID":"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb","Type":"ContainerDied","Data":"1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d"} Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.208085 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqqfb" event={"ID":"dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb","Type":"ContainerDied","Data":"b20b95eb36039045cc49af319621d539e3867b2ec5748b29d5419b0ee942114d"} Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.208042 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqqfb" Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.208104 4699 scope.go:117] "RemoveContainer" containerID="1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d" Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.228414 4699 scope.go:117] "RemoveContainer" containerID="4112987fd7819bd9acbd52ee0573be4741193978a977b5c5b5cb9d664c60cbaa" Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.248719 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqqfb"] Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.254426 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqqfb"] Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.270726 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" path="/var/lib/kubelet/pods/dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb/volumes" Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.271677 4699 scope.go:117] "RemoveContainer" 
containerID="d12c24885bdbc88a04654df5aa4a62aabfb99830c195020e90a791e630ba32a8" Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.286589 4699 scope.go:117] "RemoveContainer" containerID="1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d" Feb 26 11:29:30 crc kubenswrapper[4699]: E0226 11:29:30.286973 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d\": container with ID starting with 1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d not found: ID does not exist" containerID="1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d" Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.287016 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d"} err="failed to get container status \"1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d\": rpc error: code = NotFound desc = could not find container \"1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d\": container with ID starting with 1f8302bbf90eb87cb30aa8a818dfa3d5a4eed62fee1821d469834548965e488d not found: ID does not exist" Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.287044 4699 scope.go:117] "RemoveContainer" containerID="4112987fd7819bd9acbd52ee0573be4741193978a977b5c5b5cb9d664c60cbaa" Feb 26 11:29:30 crc kubenswrapper[4699]: E0226 11:29:30.287580 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4112987fd7819bd9acbd52ee0573be4741193978a977b5c5b5cb9d664c60cbaa\": container with ID starting with 4112987fd7819bd9acbd52ee0573be4741193978a977b5c5b5cb9d664c60cbaa not found: ID does not exist" containerID="4112987fd7819bd9acbd52ee0573be4741193978a977b5c5b5cb9d664c60cbaa" Feb 26 11:29:30 crc 
kubenswrapper[4699]: I0226 11:29:30.287612 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4112987fd7819bd9acbd52ee0573be4741193978a977b5c5b5cb9d664c60cbaa"} err="failed to get container status \"4112987fd7819bd9acbd52ee0573be4741193978a977b5c5b5cb9d664c60cbaa\": rpc error: code = NotFound desc = could not find container \"4112987fd7819bd9acbd52ee0573be4741193978a977b5c5b5cb9d664c60cbaa\": container with ID starting with 4112987fd7819bd9acbd52ee0573be4741193978a977b5c5b5cb9d664c60cbaa not found: ID does not exist" Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.287635 4699 scope.go:117] "RemoveContainer" containerID="d12c24885bdbc88a04654df5aa4a62aabfb99830c195020e90a791e630ba32a8" Feb 26 11:29:30 crc kubenswrapper[4699]: E0226 11:29:30.287835 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d12c24885bdbc88a04654df5aa4a62aabfb99830c195020e90a791e630ba32a8\": container with ID starting with d12c24885bdbc88a04654df5aa4a62aabfb99830c195020e90a791e630ba32a8 not found: ID does not exist" containerID="d12c24885bdbc88a04654df5aa4a62aabfb99830c195020e90a791e630ba32a8" Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.287869 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d12c24885bdbc88a04654df5aa4a62aabfb99830c195020e90a791e630ba32a8"} err="failed to get container status \"d12c24885bdbc88a04654df5aa4a62aabfb99830c195020e90a791e630ba32a8\": rpc error: code = NotFound desc = could not find container \"d12c24885bdbc88a04654df5aa4a62aabfb99830c195020e90a791e630ba32a8\": container with ID starting with d12c24885bdbc88a04654df5aa4a62aabfb99830c195020e90a791e630ba32a8 not found: ID does not exist" Feb 26 11:29:30 crc kubenswrapper[4699]: I0226 11:29:30.573441 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-d2pxc" Feb 26 11:29:31 crc kubenswrapper[4699]: I0226 11:29:31.203440 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jxr77" Feb 26 11:29:31 crc kubenswrapper[4699]: I0226 11:29:31.376251 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-fnnc7" Feb 26 11:29:47 crc kubenswrapper[4699]: I0226 11:29:47.915839 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn7l5"] Feb 26 11:29:47 crc kubenswrapper[4699]: E0226 11:29:47.917067 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" containerName="extract-content" Feb 26 11:29:47 crc kubenswrapper[4699]: I0226 11:29:47.917174 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" containerName="extract-content" Feb 26 11:29:47 crc kubenswrapper[4699]: E0226 11:29:47.917193 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" containerName="extract-utilities" Feb 26 11:29:47 crc kubenswrapper[4699]: I0226 11:29:47.917200 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" containerName="extract-utilities" Feb 26 11:29:47 crc kubenswrapper[4699]: E0226 11:29:47.917207 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" containerName="registry-server" Feb 26 11:29:47 crc kubenswrapper[4699]: I0226 11:29:47.917215 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" containerName="registry-server" Feb 26 11:29:47 crc kubenswrapper[4699]: I0226 11:29:47.917376 4699 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dcab2afa-9fb1-4d74-9a95-c2fe6a00bbfb" containerName="registry-server" Feb 26 11:29:47 crc kubenswrapper[4699]: I0226 11:29:47.918410 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" Feb 26 11:29:47 crc kubenswrapper[4699]: I0226 11:29:47.923280 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 26 11:29:47 crc kubenswrapper[4699]: I0226 11:29:47.923412 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-69gc7" Feb 26 11:29:47 crc kubenswrapper[4699]: I0226 11:29:47.923744 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 26 11:29:47 crc kubenswrapper[4699]: I0226 11:29:47.929590 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn7l5"] Feb 26 11:29:47 crc kubenswrapper[4699]: I0226 11:29:47.931108 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.000750 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-65vml"] Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.001957 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.006458 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.009560 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-65vml"] Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.122103 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-config\") pod \"dnsmasq-dns-78dd6ddcc-65vml\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.122236 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44038b95-eefd-44cd-9781-0a2273605e75-config\") pod \"dnsmasq-dns-675f4bcbfc-vn7l5\" (UID: \"44038b95-eefd-44cd-9781-0a2273605e75\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.122283 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4949\" (UniqueName: \"kubernetes.io/projected/3eb471e3-5e11-44a3-b3cd-176785c79d76-kube-api-access-h4949\") pod \"dnsmasq-dns-78dd6ddcc-65vml\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.122339 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5r5x\" (UniqueName: \"kubernetes.io/projected/44038b95-eefd-44cd-9781-0a2273605e75-kube-api-access-d5r5x\") pod \"dnsmasq-dns-675f4bcbfc-vn7l5\" (UID: \"44038b95-eefd-44cd-9781-0a2273605e75\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.122385 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-65vml\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.223538 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5r5x\" (UniqueName: \"kubernetes.io/projected/44038b95-eefd-44cd-9781-0a2273605e75-kube-api-access-d5r5x\") pod \"dnsmasq-dns-675f4bcbfc-vn7l5\" (UID: \"44038b95-eefd-44cd-9781-0a2273605e75\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.223601 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-65vml\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.223639 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-config\") pod \"dnsmasq-dns-78dd6ddcc-65vml\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.223673 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44038b95-eefd-44cd-9781-0a2273605e75-config\") pod \"dnsmasq-dns-675f4bcbfc-vn7l5\" (UID: \"44038b95-eefd-44cd-9781-0a2273605e75\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" Feb 26 11:29:48 crc kubenswrapper[4699]: 
I0226 11:29:48.223707 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4949\" (UniqueName: \"kubernetes.io/projected/3eb471e3-5e11-44a3-b3cd-176785c79d76-kube-api-access-h4949\") pod \"dnsmasq-dns-78dd6ddcc-65vml\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.224842 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-config\") pod \"dnsmasq-dns-78dd6ddcc-65vml\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.224868 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-65vml\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.224888 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44038b95-eefd-44cd-9781-0a2273605e75-config\") pod \"dnsmasq-dns-675f4bcbfc-vn7l5\" (UID: \"44038b95-eefd-44cd-9781-0a2273605e75\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.243678 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4949\" (UniqueName: \"kubernetes.io/projected/3eb471e3-5e11-44a3-b3cd-176785c79d76-kube-api-access-h4949\") pod \"dnsmasq-dns-78dd6ddcc-65vml\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.248975 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d5r5x\" (UniqueName: \"kubernetes.io/projected/44038b95-eefd-44cd-9781-0a2273605e75-kube-api-access-d5r5x\") pod \"dnsmasq-dns-675f4bcbfc-vn7l5\" (UID: \"44038b95-eefd-44cd-9781-0a2273605e75\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.325022 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.537648 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.741469 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-65vml"] Feb 26 11:29:48 crc kubenswrapper[4699]: I0226 11:29:48.934881 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn7l5"] Feb 26 11:29:49 crc kubenswrapper[4699]: I0226 11:29:49.346620 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" event={"ID":"3eb471e3-5e11-44a3-b3cd-176785c79d76","Type":"ContainerStarted","Data":"718ea2b3fa019be493e1d3c84030139b5efcc52cadefdd0f6d61e3832e93141d"} Feb 26 11:29:49 crc kubenswrapper[4699]: I0226 11:29:49.349437 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" event={"ID":"44038b95-eefd-44cd-9781-0a2273605e75","Type":"ContainerStarted","Data":"02fc15d2715e55f8c8cd19bc42d6cb612f93305db1f6bde0aa9d00c273dd8d8c"} Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.506922 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn7l5"] Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.531540 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpxrq"] Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.533016 4699 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.543894 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpxrq"] Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.667971 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vpxrq\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.668368 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khq7k\" (UniqueName: \"kubernetes.io/projected/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-kube-api-access-khq7k\") pod \"dnsmasq-dns-666b6646f7-vpxrq\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.668443 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-config\") pod \"dnsmasq-dns-666b6646f7-vpxrq\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.770836 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vpxrq\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.770936 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-khq7k\" (UniqueName: \"kubernetes.io/projected/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-kube-api-access-khq7k\") pod \"dnsmasq-dns-666b6646f7-vpxrq\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.771009 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-config\") pod \"dnsmasq-dns-666b6646f7-vpxrq\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.772543 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-config\") pod \"dnsmasq-dns-666b6646f7-vpxrq\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.772572 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vpxrq\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.807655 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-65vml"] Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.808210 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khq7k\" (UniqueName: \"kubernetes.io/projected/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-kube-api-access-khq7k\") pod \"dnsmasq-dns-666b6646f7-vpxrq\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.833001 4699 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mwnwv"] Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.834607 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.856371 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mwnwv"] Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.889473 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.973195 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mwnwv\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.973323 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-config\") pod \"dnsmasq-dns-57d769cc4f-mwnwv\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:29:50 crc kubenswrapper[4699]: I0226 11:29:50.973368 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx227\" (UniqueName: \"kubernetes.io/projected/13838b5f-5f0e-44ba-8b63-97b4e20efbce-kube-api-access-vx227\") pod \"dnsmasq-dns-57d769cc4f-mwnwv\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.074144 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mwnwv\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.074249 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-config\") pod \"dnsmasq-dns-57d769cc4f-mwnwv\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.074276 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx227\" (UniqueName: \"kubernetes.io/projected/13838b5f-5f0e-44ba-8b63-97b4e20efbce-kube-api-access-vx227\") pod \"dnsmasq-dns-57d769cc4f-mwnwv\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.075230 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-config\") pod \"dnsmasq-dns-57d769cc4f-mwnwv\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.075256 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mwnwv\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.096364 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx227\" (UniqueName: \"kubernetes.io/projected/13838b5f-5f0e-44ba-8b63-97b4e20efbce-kube-api-access-vx227\") pod 
\"dnsmasq-dns-57d769cc4f-mwnwv\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.166467 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.529629 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpxrq"] Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.639756 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mwnwv"] Feb 26 11:29:51 crc kubenswrapper[4699]: W0226 11:29:51.646715 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13838b5f_5f0e_44ba_8b63_97b4e20efbce.slice/crio-30da7116ef227de51d61b599067ff253e3fbcd27cb9bf2e3d4c83d06e5a7374a WatchSource:0}: Error finding container 30da7116ef227de51d61b599067ff253e3fbcd27cb9bf2e3d4c83d06e5a7374a: Status 404 returned error can't find the container with id 30da7116ef227de51d61b599067ff253e3fbcd27cb9bf2e3d4c83d06e5a7374a Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.699165 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.700758 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.705135 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.705209 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.705267 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-g9kcp" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.705265 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.705288 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.705343 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.706641 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.712437 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.890909 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f4a652b4-5b96-4ebf-81b4-df92846455bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.891418 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.891537 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.891633 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.891714 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42595\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-kube-api-access-42595\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.891809 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.891914 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-server-conf\") 
pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.892016 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.892229 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.892346 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.892466 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f4a652b4-5b96-4ebf-81b4-df92846455bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.996660 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " 
pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.996779 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f4a652b4-5b96-4ebf-81b4-df92846455bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.996840 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f4a652b4-5b96-4ebf-81b4-df92846455bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.996927 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.997893 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.997972 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.997433 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.998183 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.998244 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42595\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-kube-api-access-42595\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.998304 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.998329 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.998359 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-config-data\") 
pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.998444 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.999204 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.999845 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:51 crc kubenswrapper[4699]: I0226 11:29:51.999921 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.002688 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.003780 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.006447 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f4a652b4-5b96-4ebf-81b4-df92846455bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.013012 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f4a652b4-5b96-4ebf-81b4-df92846455bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.020947 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.021842 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.022780 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42595\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-kube-api-access-42595\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.033317 4699 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.034706 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " pod="openstack/rabbitmq-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.036931 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.037238 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.037454 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.037706 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mp8r4" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.037880 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.038027 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.040520 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.044328 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.200896 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.200960 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.200983 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.201019 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.201170 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.201192 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/2d57084d-dc87-44e4-bbc8-50c402b7165b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.201222 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.201239 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d57084d-dc87-44e4-bbc8-50c402b7165b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.201257 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.201281 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.201385 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz7xp\" (UniqueName: 
\"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-kube-api-access-pz7xp\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.303761 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.304005 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.304029 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.304062 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.304098 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.306566 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d57084d-dc87-44e4-bbc8-50c402b7165b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.306651 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.306676 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d57084d-dc87-44e4-bbc8-50c402b7165b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.306706 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.306729 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 
11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.306764 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz7xp\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-kube-api-access-pz7xp\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.311605 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.312343 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.312991 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.314760 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.314921 4699 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.315227 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d57084d-dc87-44e4-bbc8-50c402b7165b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.315462 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.315924 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.335503 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.335526 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz7xp\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-kube-api-access-pz7xp\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.335731 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d57084d-dc87-44e4-bbc8-50c402b7165b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.338450 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.350020 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.388318 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.390872 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" event={"ID":"9e16e518-0512-4df0-b8c7-1cd2f9c1e352","Type":"ContainerStarted","Data":"2aaa9042481814730657b40428f86e835d8db2be305a9194e255d49a0c3e4409"} Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.401217 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" event={"ID":"13838b5f-5f0e-44ba-8b63-97b4e20efbce","Type":"ContainerStarted","Data":"30da7116ef227de51d61b599067ff253e3fbcd27cb9bf2e3d4c83d06e5a7374a"} Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.846672 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:29:52 crc kubenswrapper[4699]: I0226 11:29:52.853238 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.383977 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.388672 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.392841 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.392927 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.392936 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.393158 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-m8zx4" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.420301 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.423940 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.623940 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6fdc6b6d-ac77-4179-9864-f220d622c0f4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.623997 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.624018 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fdc6b6d-ac77-4179-9864-f220d622c0f4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.624053 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxgtj\" (UniqueName: \"kubernetes.io/projected/6fdc6b6d-ac77-4179-9864-f220d622c0f4-kube-api-access-cxgtj\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.627250 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fdc6b6d-ac77-4179-9864-f220d622c0f4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.627357 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdc6b6d-ac77-4179-9864-f220d622c0f4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.631669 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fdc6b6d-ac77-4179-9864-f220d622c0f4-kolla-config\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.631768 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/6fdc6b6d-ac77-4179-9864-f220d622c0f4-config-data-default\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.735464 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fdc6b6d-ac77-4179-9864-f220d622c0f4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.735523 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdc6b6d-ac77-4179-9864-f220d622c0f4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.735721 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fdc6b6d-ac77-4179-9864-f220d622c0f4-kolla-config\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.735761 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6fdc6b6d-ac77-4179-9864-f220d622c0f4-config-data-default\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.736285 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6fdc6b6d-ac77-4179-9864-f220d622c0f4-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.736319 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.736336 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fdc6b6d-ac77-4179-9864-f220d622c0f4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.736368 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxgtj\" (UniqueName: \"kubernetes.io/projected/6fdc6b6d-ac77-4179-9864-f220d622c0f4-kube-api-access-cxgtj\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.736971 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6fdc6b6d-ac77-4179-9864-f220d622c0f4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.737244 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 
11:29:53.737633 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6fdc6b6d-ac77-4179-9864-f220d622c0f4-config-data-default\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.738858 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fdc6b6d-ac77-4179-9864-f220d622c0f4-kolla-config\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.739290 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fdc6b6d-ac77-4179-9864-f220d622c0f4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.749568 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fdc6b6d-ac77-4179-9864-f220d622c0f4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.753827 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdc6b6d-ac77-4179-9864-f220d622c0f4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.758139 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxgtj\" (UniqueName: 
\"kubernetes.io/projected/6fdc6b6d-ac77-4179-9864-f220d622c0f4-kube-api-access-cxgtj\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.767356 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"6fdc6b6d-ac77-4179-9864-f220d622c0f4\") " pod="openstack/openstack-galera-0" Feb 26 11:29:53 crc kubenswrapper[4699]: I0226 11:29:53.982013 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.780460 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.781674 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.786730 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.786936 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.787096 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.787893 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hwww4" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.805184 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.850360 4699 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.851547 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.857566 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.858165 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-znbkh" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.858413 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.868084 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.959277 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5flh5\" (UniqueName: \"kubernetes.io/projected/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-kube-api-access-5flh5\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.959460 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.959497 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: 
\"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.959600 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.959743 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.959802 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-kolla-config\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.959823 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.959919 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-config-data\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " 
pod="openstack/memcached-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.959990 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.960033 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.960060 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.960081 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4ntn\" (UniqueName: \"kubernetes.io/projected/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-kube-api-access-f4ntn\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:54 crc kubenswrapper[4699]: I0226 11:29:54.960130 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " 
pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061294 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061343 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061361 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4ntn\" (UniqueName: \"kubernetes.io/projected/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-kube-api-access-f4ntn\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061393 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061419 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5flh5\" (UniqueName: \"kubernetes.io/projected/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-kube-api-access-5flh5\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 
11:29:55.061463 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061480 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061507 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061535 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061553 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061568 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-kolla-config\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061597 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-config-data\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.061615 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.062534 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.062757 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.062861 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") device mount 
path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.063423 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-config-data\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.063850 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.064136 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-kolla-config\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.064694 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.065964 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.067067 4699 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.067358 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.077213 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.081766 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4ntn\" (UniqueName: \"kubernetes.io/projected/edce8e75-6dd5-4fbd-8f76-bc6553cc27b9-kube-api-access-f4ntn\") pod \"openstack-cell1-galera-0\" (UID: \"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.083526 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5flh5\" (UniqueName: \"kubernetes.io/projected/6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2-kube-api-access-5flh5\") pod \"memcached-0\" (UID: \"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2\") " pod="openstack/memcached-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.084231 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.122793 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 26 11:29:55 crc kubenswrapper[4699]: I0226 11:29:55.181803 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 26 11:29:57 crc kubenswrapper[4699]: I0226 11:29:57.385750 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:29:57 crc kubenswrapper[4699]: I0226 11:29:57.387077 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:29:57 crc kubenswrapper[4699]: I0226 11:29:57.388843 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-f8vrt" Feb 26 11:29:57 crc kubenswrapper[4699]: I0226 11:29:57.396660 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:29:57 crc kubenswrapper[4699]: I0226 11:29:57.506694 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vnz\" (UniqueName: \"kubernetes.io/projected/2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf-kube-api-access-r4vnz\") pod \"kube-state-metrics-0\" (UID: \"2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf\") " pod="openstack/kube-state-metrics-0" Feb 26 11:29:57 crc kubenswrapper[4699]: I0226 11:29:57.608372 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4vnz\" (UniqueName: \"kubernetes.io/projected/2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf-kube-api-access-r4vnz\") pod \"kube-state-metrics-0\" (UID: \"2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf\") " pod="openstack/kube-state-metrics-0" Feb 26 11:29:57 crc kubenswrapper[4699]: I0226 11:29:57.631678 4699 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r4vnz\" (UniqueName: \"kubernetes.io/projected/2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf-kube-api-access-r4vnz\") pod \"kube-state-metrics-0\" (UID: \"2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf\") " pod="openstack/kube-state-metrics-0" Feb 26 11:29:57 crc kubenswrapper[4699]: I0226 11:29:57.702337 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:29:58 crc kubenswrapper[4699]: W0226 11:29:58.581315 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d57084d_dc87_44e4_bbc8_50c402b7165b.slice/crio-6c8df8aa27d02e0ceb8002bd8f20b8b521706c7be8fe88b152c705914906b7ae WatchSource:0}: Error finding container 6c8df8aa27d02e0ceb8002bd8f20b8b521706c7be8fe88b152c705914906b7ae: Status 404 returned error can't find the container with id 6c8df8aa27d02e0ceb8002bd8f20b8b521706c7be8fe88b152c705914906b7ae Feb 26 11:29:59 crc kubenswrapper[4699]: I0226 11:29:59.480029 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d57084d-dc87-44e4-bbc8-50c402b7165b","Type":"ContainerStarted","Data":"6c8df8aa27d02e0ceb8002bd8f20b8b521706c7be8fe88b152c705914906b7ae"} Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.139501 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535090-7v44h"] Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.140824 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535090-7v44h" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.148005 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.148219 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.148341 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.159697 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj"] Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.160888 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.163988 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.164162 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.192662 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535090-7v44h"] Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.203818 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj"] Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.209468 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nrvng"] Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 
11:30:00.210668 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.219480 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nrvng"] Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.222094 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hplxc" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.222271 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.222518 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.256492 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-gxnxl"] Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.258770 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.259022 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvx8x\" (UniqueName: \"kubernetes.io/projected/9b298a96-eca9-49eb-a547-f88e986f326e-kube-api-access-vvx8x\") pod \"collect-profiles-29535090-n42nj\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.259082 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b298a96-eca9-49eb-a547-f88e986f326e-config-volume\") pod \"collect-profiles-29535090-n42nj\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.259302 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmg6x\" (UniqueName: \"kubernetes.io/projected/a0d38a99-b56f-423c-9c5b-c8f726bf62f9-kube-api-access-jmg6x\") pod \"auto-csr-approver-29535090-7v44h\" (UID: \"a0d38a99-b56f-423c-9c5b-c8f726bf62f9\") " pod="openshift-infra/auto-csr-approver-29535090-7v44h" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.259346 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b298a96-eca9-49eb-a547-f88e986f326e-secret-volume\") pod \"collect-profiles-29535090-n42nj\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.292158 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gxnxl"] Feb 26 
11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362235 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-scripts\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362354 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-var-run\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362385 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q5c2\" (UniqueName: \"kubernetes.io/projected/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-kube-api-access-5q5c2\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362415 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-var-lib\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362455 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqrz2\" (UniqueName: \"kubernetes.io/projected/cd4015f0-f1a7-40d7-ae69-089f74a6873d-kube-api-access-xqrz2\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc 
kubenswrapper[4699]: I0226 11:30:00.362491 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd4015f0-f1a7-40d7-ae69-089f74a6873d-var-run\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362511 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4015f0-f1a7-40d7-ae69-089f74a6873d-combined-ca-bundle\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362547 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmg6x\" (UniqueName: \"kubernetes.io/projected/a0d38a99-b56f-423c-9c5b-c8f726bf62f9-kube-api-access-jmg6x\") pod \"auto-csr-approver-29535090-7v44h\" (UID: \"a0d38a99-b56f-423c-9c5b-c8f726bf62f9\") " pod="openshift-infra/auto-csr-approver-29535090-7v44h" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362584 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b298a96-eca9-49eb-a547-f88e986f326e-secret-volume\") pod \"collect-profiles-29535090-n42nj\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362643 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-var-log\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc 
kubenswrapper[4699]: I0226 11:30:00.362673 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd4015f0-f1a7-40d7-ae69-089f74a6873d-var-log-ovn\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362694 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvx8x\" (UniqueName: \"kubernetes.io/projected/9b298a96-eca9-49eb-a547-f88e986f326e-kube-api-access-vvx8x\") pod \"collect-profiles-29535090-n42nj\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362717 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b298a96-eca9-49eb-a547-f88e986f326e-config-volume\") pod \"collect-profiles-29535090-n42nj\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362748 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4015f0-f1a7-40d7-ae69-089f74a6873d-ovn-controller-tls-certs\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362775 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-etc-ovs\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " 
pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362795 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd4015f0-f1a7-40d7-ae69-089f74a6873d-scripts\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.362828 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd4015f0-f1a7-40d7-ae69-089f74a6873d-var-run-ovn\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.363912 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b298a96-eca9-49eb-a547-f88e986f326e-config-volume\") pod \"collect-profiles-29535090-n42nj\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.379209 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b298a96-eca9-49eb-a547-f88e986f326e-secret-volume\") pod \"collect-profiles-29535090-n42nj\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.381920 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmg6x\" (UniqueName: \"kubernetes.io/projected/a0d38a99-b56f-423c-9c5b-c8f726bf62f9-kube-api-access-jmg6x\") pod \"auto-csr-approver-29535090-7v44h\" (UID: \"a0d38a99-b56f-423c-9c5b-c8f726bf62f9\") " 
pod="openshift-infra/auto-csr-approver-29535090-7v44h" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.383317 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvx8x\" (UniqueName: \"kubernetes.io/projected/9b298a96-eca9-49eb-a547-f88e986f326e-kube-api-access-vvx8x\") pod \"collect-profiles-29535090-n42nj\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.464941 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4015f0-f1a7-40d7-ae69-089f74a6873d-ovn-controller-tls-certs\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465005 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-etc-ovs\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465036 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd4015f0-f1a7-40d7-ae69-089f74a6873d-scripts\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465074 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd4015f0-f1a7-40d7-ae69-089f74a6873d-var-run-ovn\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc 
kubenswrapper[4699]: I0226 11:30:00.465143 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-scripts\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465195 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-var-run\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465220 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q5c2\" (UniqueName: \"kubernetes.io/projected/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-kube-api-access-5q5c2\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465279 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-var-lib\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465314 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqrz2\" (UniqueName: \"kubernetes.io/projected/cd4015f0-f1a7-40d7-ae69-089f74a6873d-kube-api-access-xqrz2\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465340 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4015f0-f1a7-40d7-ae69-089f74a6873d-combined-ca-bundle\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465362 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd4015f0-f1a7-40d7-ae69-089f74a6873d-var-run\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465412 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-var-log\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465430 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd4015f0-f1a7-40d7-ae69-089f74a6873d-var-log-ovn\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.465948 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd4015f0-f1a7-40d7-ae69-089f74a6873d-var-log-ovn\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.466349 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-etc-ovs\") pod \"ovn-controller-ovs-gxnxl\" (UID: 
\"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.467459 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd4015f0-f1a7-40d7-ae69-089f74a6873d-var-run\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.467625 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-var-log\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.467844 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd4015f0-f1a7-40d7-ae69-089f74a6873d-var-run-ovn\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.467934 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-var-run\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.468507 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-var-lib\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.469013 4699 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-scripts\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.473141 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4015f0-f1a7-40d7-ae69-089f74a6873d-ovn-controller-tls-certs\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.473401 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd4015f0-f1a7-40d7-ae69-089f74a6873d-scripts\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.474963 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4015f0-f1a7-40d7-ae69-089f74a6873d-combined-ca-bundle\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.481995 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqrz2\" (UniqueName: \"kubernetes.io/projected/cd4015f0-f1a7-40d7-ae69-089f74a6873d-kube-api-access-xqrz2\") pod \"ovn-controller-nrvng\" (UID: \"cd4015f0-f1a7-40d7-ae69-089f74a6873d\") " pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.482921 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535090-7v44h" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.483270 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q5c2\" (UniqueName: \"kubernetes.io/projected/8afc038e-11dc-4959-a6b0-61e9b1c2dc35-kube-api-access-5q5c2\") pod \"ovn-controller-ovs-gxnxl\" (UID: \"8afc038e-11dc-4959-a6b0-61e9b1c2dc35\") " pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.507772 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.553257 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nrvng" Feb 26 11:30:00 crc kubenswrapper[4699]: I0226 11:30:00.587786 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:03 crc kubenswrapper[4699]: I0226 11:30:03.511254 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f4a652b4-5b96-4ebf-81b4-df92846455bd","Type":"ContainerStarted","Data":"c653f2114aeba63b01bf441458d5ec8f8a6f7c0f66f8ee44c878928901c377ac"} Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.453423 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.457214 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.461496 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.461715 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-phqm4" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.461981 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.462173 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.462337 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.465483 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.542705 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef805480-81ec-4d0b-b2ca-06db4bf74383-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.542743 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ef805480-81ec-4d0b-b2ca-06db4bf74383-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.542773 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-4jf5w\" (UniqueName: \"kubernetes.io/projected/ef805480-81ec-4d0b-b2ca-06db4bf74383-kube-api-access-4jf5w\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.542846 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.542964 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef805480-81ec-4d0b-b2ca-06db4bf74383-config\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.543004 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef805480-81ec-4d0b-b2ca-06db4bf74383-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.543024 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef805480-81ec-4d0b-b2ca-06db4bf74383-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.543040 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ef805480-81ec-4d0b-b2ca-06db4bf74383-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.646339 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef805480-81ec-4d0b-b2ca-06db4bf74383-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.646384 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ef805480-81ec-4d0b-b2ca-06db4bf74383-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.646409 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jf5w\" (UniqueName: \"kubernetes.io/projected/ef805480-81ec-4d0b-b2ca-06db4bf74383-kube-api-access-4jf5w\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.646426 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.646454 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef805480-81ec-4d0b-b2ca-06db4bf74383-config\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 
crc kubenswrapper[4699]: I0226 11:30:04.646496 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef805480-81ec-4d0b-b2ca-06db4bf74383-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.646522 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef805480-81ec-4d0b-b2ca-06db4bf74383-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.646544 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef805480-81ec-4d0b-b2ca-06db4bf74383-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.647084 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.648311 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef805480-81ec-4d0b-b2ca-06db4bf74383-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.648324 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ef805480-81ec-4d0b-b2ca-06db4bf74383-config\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.648917 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ef805480-81ec-4d0b-b2ca-06db4bf74383-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.652568 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.655074 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef805480-81ec-4d0b-b2ca-06db4bf74383-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.655091 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef805480-81ec-4d0b-b2ca-06db4bf74383-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.655866 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef805480-81ec-4d0b-b2ca-06db4bf74383-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.672811 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.676240 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6kllt" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.676487 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.676507 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jf5w\" (UniqueName: \"kubernetes.io/projected/ef805480-81ec-4d0b-b2ca-06db4bf74383-kube-api-access-4jf5w\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.677905 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.680993 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.685012 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.688412 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ef805480-81ec-4d0b-b2ca-06db4bf74383\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.784610 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.848821 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b981c8a5-ce76-4bc1-a018-28255391e3f2-config\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.848880 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b981c8a5-ce76-4bc1-a018-28255391e3f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.848939 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b981c8a5-ce76-4bc1-a018-28255391e3f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.849023 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b981c8a5-ce76-4bc1-a018-28255391e3f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.849058 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b981c8a5-ce76-4bc1-a018-28255391e3f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc 
kubenswrapper[4699]: I0226 11:30:04.849097 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.849146 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz9jn\" (UniqueName: \"kubernetes.io/projected/b981c8a5-ce76-4bc1-a018-28255391e3f2-kube-api-access-sz9jn\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.849175 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b981c8a5-ce76-4bc1-a018-28255391e3f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.951498 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b981c8a5-ce76-4bc1-a018-28255391e3f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.951665 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b981c8a5-ce76-4bc1-a018-28255391e3f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.951718 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b981c8a5-ce76-4bc1-a018-28255391e3f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.951770 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.951796 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz9jn\" (UniqueName: \"kubernetes.io/projected/b981c8a5-ce76-4bc1-a018-28255391e3f2-kube-api-access-sz9jn\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.951836 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b981c8a5-ce76-4bc1-a018-28255391e3f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.951887 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b981c8a5-ce76-4bc1-a018-28255391e3f2-config\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.951915 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b981c8a5-ce76-4bc1-a018-28255391e3f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.952141 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.953171 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b981c8a5-ce76-4bc1-a018-28255391e3f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.953648 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b981c8a5-ce76-4bc1-a018-28255391e3f2-config\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.954157 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b981c8a5-ce76-4bc1-a018-28255391e3f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.957912 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b981c8a5-ce76-4bc1-a018-28255391e3f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.968729 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b981c8a5-ce76-4bc1-a018-28255391e3f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.968985 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b981c8a5-ce76-4bc1-a018-28255391e3f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.977563 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz9jn\" (UniqueName: \"kubernetes.io/projected/b981c8a5-ce76-4bc1-a018-28255391e3f2-kube-api-access-sz9jn\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:04 crc kubenswrapper[4699]: I0226 11:30:04.979673 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b981c8a5-ce76-4bc1-a018-28255391e3f2\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:05 crc kubenswrapper[4699]: I0226 11:30:05.047300 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:07 crc kubenswrapper[4699]: E0226 11:30:07.740279 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 11:30:07 crc kubenswrapper[4699]: E0226 11:30:07.740787 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4949,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityCont
ext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-65vml_openstack(3eb471e3-5e11-44a3-b3cd-176785c79d76): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:30:07 crc kubenswrapper[4699]: E0226 11:30:07.742088 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" podUID="3eb471e3-5e11-44a3-b3cd-176785c79d76" Feb 26 11:30:08 crc kubenswrapper[4699]: I0226 11:30:08.213537 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 26 11:30:08 crc kubenswrapper[4699]: I0226 11:30:08.318844 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:30:09 crc kubenswrapper[4699]: W0226 11:30:09.179713 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fdc6b6d_ac77_4179_9864_f220d622c0f4.slice/crio-180daa314286a6e53251b4285a9fb298155a547e2d8c9a8fe119ff5f5519e021 WatchSource:0}: Error finding container 180daa314286a6e53251b4285a9fb298155a547e2d8c9a8fe119ff5f5519e021: Status 404 returned error can't find the container with id 180daa314286a6e53251b4285a9fb298155a547e2d8c9a8fe119ff5f5519e021 Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 
11:30:09.262069 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.428768 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-config\") pod \"3eb471e3-5e11-44a3-b3cd-176785c79d76\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.429280 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4949\" (UniqueName: \"kubernetes.io/projected/3eb471e3-5e11-44a3-b3cd-176785c79d76-kube-api-access-h4949\") pod \"3eb471e3-5e11-44a3-b3cd-176785c79d76\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.429390 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-dns-svc\") pod \"3eb471e3-5e11-44a3-b3cd-176785c79d76\" (UID: \"3eb471e3-5e11-44a3-b3cd-176785c79d76\") " Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.429274 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-config" (OuterVolumeSpecName: "config") pod "3eb471e3-5e11-44a3-b3cd-176785c79d76" (UID: "3eb471e3-5e11-44a3-b3cd-176785c79d76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.429877 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3eb471e3-5e11-44a3-b3cd-176785c79d76" (UID: "3eb471e3-5e11-44a3-b3cd-176785c79d76"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.430072 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.430096 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb471e3-5e11-44a3-b3cd-176785c79d76-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.433900 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb471e3-5e11-44a3-b3cd-176785c79d76-kube-api-access-h4949" (OuterVolumeSpecName: "kube-api-access-h4949") pod "3eb471e3-5e11-44a3-b3cd-176785c79d76" (UID: "3eb471e3-5e11-44a3-b3cd-176785c79d76"). InnerVolumeSpecName "kube-api-access-h4949". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.495383 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.500959 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj"] Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.531723 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4949\" (UniqueName: \"kubernetes.io/projected/3eb471e3-5e11-44a3-b3cd-176785c79d76-kube-api-access-h4949\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:09 crc kubenswrapper[4699]: W0226 11:30:09.547298 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b298a96_eca9_49eb_a547_f88e986f326e.slice/crio-73fa3e6eb5ecf757d1fa8efa2ff8a94d5bbdeb6c98a459819b28cec05f8462fa WatchSource:0}: Error finding container 
73fa3e6eb5ecf757d1fa8efa2ff8a94d5bbdeb6c98a459819b28cec05f8462fa: Status 404 returned error can't find the container with id 73fa3e6eb5ecf757d1fa8efa2ff8a94d5bbdeb6c98a459819b28cec05f8462fa Feb 26 11:30:09 crc kubenswrapper[4699]: W0226 11:30:09.549022 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6530fcf8_efdc_4f91_96cb_4f4bdc8bd1d2.slice/crio-015d7330286830e04c4c9f823e3781a8f3e132ae1217540e363d506a4ef6dc91 WatchSource:0}: Error finding container 015d7330286830e04c4c9f823e3781a8f3e132ae1217540e363d506a4ef6dc91: Status 404 returned error can't find the container with id 015d7330286830e04c4c9f823e3781a8f3e132ae1217540e363d506a4ef6dc91 Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.553662 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" event={"ID":"3eb471e3-5e11-44a3-b3cd-176785c79d76","Type":"ContainerDied","Data":"718ea2b3fa019be493e1d3c84030139b5efcc52cadefdd0f6d61e3832e93141d"} Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.553758 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-65vml" Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.558094 4699 generic.go:334] "Generic (PLEG): container finished" podID="9e16e518-0512-4df0-b8c7-1cd2f9c1e352" containerID="4e432431b1637ae6370a4d90a6195b4e6436618155eeb3ae004d2fe9bb9b8958" exitCode=0 Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.558187 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" event={"ID":"9e16e518-0512-4df0-b8c7-1cd2f9c1e352","Type":"ContainerDied","Data":"4e432431b1637ae6370a4d90a6195b4e6436618155eeb3ae004d2fe9bb9b8958"} Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.560742 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf","Type":"ContainerStarted","Data":"d71534977c30792b789d4e1ac180ec5af3f9ed3738ad0ab651747396010424ea"} Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.564241 4699 generic.go:334] "Generic (PLEG): container finished" podID="13838b5f-5f0e-44ba-8b63-97b4e20efbce" containerID="d9b6954a7b897d0e33bc9da8a8d509e007749e72356e89501274b2dcc58768fe" exitCode=0 Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.564290 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" event={"ID":"13838b5f-5f0e-44ba-8b63-97b4e20efbce","Type":"ContainerDied","Data":"d9b6954a7b897d0e33bc9da8a8d509e007749e72356e89501274b2dcc58768fe"} Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.582106 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" event={"ID":"44038b95-eefd-44cd-9781-0a2273605e75","Type":"ContainerStarted","Data":"80050d8650124cdda213563d70066e26f43de8d356825ac23d9b4fdfcc1d3b22"} Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.620390 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"6fdc6b6d-ac77-4179-9864-f220d622c0f4","Type":"ContainerStarted","Data":"180daa314286a6e53251b4285a9fb298155a547e2d8c9a8fe119ff5f5519e021"} Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.698405 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-65vml"] Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.702532 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-65vml"] Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.725608 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.742752 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nrvng"] Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.833330 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 11:30:09 crc kubenswrapper[4699]: W0226 11:30:09.869091 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd4015f0_f1a7_40d7_ae69_089f74a6873d.slice/crio-e0ddd082de6aa81c77716d005cbb1ffb13e1a074048298bbd3dbcb794e695dc8 WatchSource:0}: Error finding container e0ddd082de6aa81c77716d005cbb1ffb13e1a074048298bbd3dbcb794e695dc8: Status 404 returned error can't find the container with id e0ddd082de6aa81c77716d005cbb1ffb13e1a074048298bbd3dbcb794e695dc8 Feb 26 11:30:09 crc kubenswrapper[4699]: I0226 11:30:09.937680 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535090-7v44h"] Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.033658 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.134741 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gxnxl"] Feb 26 11:30:10 crc 
kubenswrapper[4699]: I0226 11:30:10.287828 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb471e3-5e11-44a3-b3cd-176785c79d76" path="/var/lib/kubelet/pods/3eb471e3-5e11-44a3-b3cd-176785c79d76/volumes" Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.578467 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.631456 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" event={"ID":"9e16e518-0512-4df0-b8c7-1cd2f9c1e352","Type":"ContainerStarted","Data":"fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.631526 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.633279 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d57084d-dc87-44e4-bbc8-50c402b7165b","Type":"ContainerStarted","Data":"4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.635399 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9","Type":"ContainerStarted","Data":"35748eb519ea6086c036c295d69a5cb3c52e1a34fc5cccd2cdf67e3b46b840e7"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.636497 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nrvng" event={"ID":"cd4015f0-f1a7-40d7-ae69-089f74a6873d","Type":"ContainerStarted","Data":"e0ddd082de6aa81c77716d005cbb1ffb13e1a074048298bbd3dbcb794e695dc8"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.637737 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"b981c8a5-ce76-4bc1-a018-28255391e3f2","Type":"ContainerStarted","Data":"125649a15ff4d41dbc758db636b019e0dbee2d3932c70b8de8de8d9909f37601"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.638979 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2","Type":"ContainerStarted","Data":"015d7330286830e04c4c9f823e3781a8f3e132ae1217540e363d506a4ef6dc91"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.639834 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gxnxl" event={"ID":"8afc038e-11dc-4959-a6b0-61e9b1c2dc35","Type":"ContainerStarted","Data":"7a1471dd7a177467bafda49b607bab61b7b07a37e78e1062c61ad6831146cbf5"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.640997 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f4a652b4-5b96-4ebf-81b4-df92846455bd","Type":"ContainerStarted","Data":"01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.642786 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ef805480-81ec-4d0b-b2ca-06db4bf74383","Type":"ContainerStarted","Data":"12c7d2a9a29d81a322a2f794b5e7d85e9cdca114161fa1145fb259ecb38d8916"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.647531 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535090-7v44h" event={"ID":"a0d38a99-b56f-423c-9c5b-c8f726bf62f9","Type":"ContainerStarted","Data":"f0b01a3b5e5254bcfd2326338d3cde12bc7d1f83ca8ef3d4f65d618963dce401"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.649235 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" podStartSLOduration=2.952203198 podStartE2EDuration="20.649224467s" podCreationTimestamp="2026-02-26 11:29:50 +0000 UTC" 
firstStartedPulling="2026-02-26 11:29:51.53095071 +0000 UTC m=+1137.341777134" lastFinishedPulling="2026-02-26 11:30:09.227971929 +0000 UTC m=+1155.038798403" observedRunningTime="2026-02-26 11:30:10.646765638 +0000 UTC m=+1156.457592082" watchObservedRunningTime="2026-02-26 11:30:10.649224467 +0000 UTC m=+1156.460050901" Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.650278 4699 generic.go:334] "Generic (PLEG): container finished" podID="44038b95-eefd-44cd-9781-0a2273605e75" containerID="80050d8650124cdda213563d70066e26f43de8d356825ac23d9b4fdfcc1d3b22" exitCode=0 Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.650314 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" event={"ID":"44038b95-eefd-44cd-9781-0a2273605e75","Type":"ContainerDied","Data":"80050d8650124cdda213563d70066e26f43de8d356825ac23d9b4fdfcc1d3b22"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.650377 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" event={"ID":"44038b95-eefd-44cd-9781-0a2273605e75","Type":"ContainerDied","Data":"02fc15d2715e55f8c8cd19bc42d6cb612f93305db1f6bde0aa9d00c273dd8d8c"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.650390 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02fc15d2715e55f8c8cd19bc42d6cb612f93305db1f6bde0aa9d00c273dd8d8c" Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.653312 4699 generic.go:334] "Generic (PLEG): container finished" podID="9b298a96-eca9-49eb-a547-f88e986f326e" containerID="81dc18175a458a0d1e57583f805b2614af5b4f06183622336860874df0cedc4e" exitCode=0 Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.653362 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" 
event={"ID":"9b298a96-eca9-49eb-a547-f88e986f326e","Type":"ContainerDied","Data":"81dc18175a458a0d1e57583f805b2614af5b4f06183622336860874df0cedc4e"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.653390 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" event={"ID":"9b298a96-eca9-49eb-a547-f88e986f326e","Type":"ContainerStarted","Data":"73fa3e6eb5ecf757d1fa8efa2ff8a94d5bbdeb6c98a459819b28cec05f8462fa"} Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.675684 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.762524 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5r5x\" (UniqueName: \"kubernetes.io/projected/44038b95-eefd-44cd-9781-0a2273605e75-kube-api-access-d5r5x\") pod \"44038b95-eefd-44cd-9781-0a2273605e75\" (UID: \"44038b95-eefd-44cd-9781-0a2273605e75\") " Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.762670 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44038b95-eefd-44cd-9781-0a2273605e75-config\") pod \"44038b95-eefd-44cd-9781-0a2273605e75\" (UID: \"44038b95-eefd-44cd-9781-0a2273605e75\") " Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.768230 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44038b95-eefd-44cd-9781-0a2273605e75-kube-api-access-d5r5x" (OuterVolumeSpecName: "kube-api-access-d5r5x") pod "44038b95-eefd-44cd-9781-0a2273605e75" (UID: "44038b95-eefd-44cd-9781-0a2273605e75"). InnerVolumeSpecName "kube-api-access-d5r5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.780722 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44038b95-eefd-44cd-9781-0a2273605e75-config" (OuterVolumeSpecName: "config") pod "44038b95-eefd-44cd-9781-0a2273605e75" (UID: "44038b95-eefd-44cd-9781-0a2273605e75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.865057 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5r5x\" (UniqueName: \"kubernetes.io/projected/44038b95-eefd-44cd-9781-0a2273605e75-kube-api-access-d5r5x\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:10 crc kubenswrapper[4699]: I0226 11:30:10.865094 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44038b95-eefd-44cd-9781-0a2273605e75-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:11 crc kubenswrapper[4699]: I0226 11:30:11.584880 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:30:11 crc kubenswrapper[4699]: I0226 11:30:11.584944 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:30:11 crc kubenswrapper[4699]: I0226 11:30:11.664815 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" 
event={"ID":"13838b5f-5f0e-44ba-8b63-97b4e20efbce","Type":"ContainerStarted","Data":"66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800"} Feb 26 11:30:11 crc kubenswrapper[4699]: I0226 11:30:11.665057 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vn7l5" Feb 26 11:30:11 crc kubenswrapper[4699]: I0226 11:30:11.682356 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" podStartSLOduration=4.07180611 podStartE2EDuration="21.682336693s" podCreationTimestamp="2026-02-26 11:29:50 +0000 UTC" firstStartedPulling="2026-02-26 11:29:51.6512119 +0000 UTC m=+1137.462038334" lastFinishedPulling="2026-02-26 11:30:09.261742483 +0000 UTC m=+1155.072568917" observedRunningTime="2026-02-26 11:30:11.680421639 +0000 UTC m=+1157.491248083" watchObservedRunningTime="2026-02-26 11:30:11.682336693 +0000 UTC m=+1157.493163137" Feb 26 11:30:11 crc kubenswrapper[4699]: I0226 11:30:11.737036 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn7l5"] Feb 26 11:30:11 crc kubenswrapper[4699]: I0226 11:30:11.747441 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vn7l5"] Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.272396 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44038b95-eefd-44cd-9781-0a2273605e75" path="/var/lib/kubelet/pods/44038b95-eefd-44cd-9781-0a2273605e75/volumes" Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.493635 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.594992 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b298a96-eca9-49eb-a547-f88e986f326e-config-volume\") pod \"9b298a96-eca9-49eb-a547-f88e986f326e\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.595044 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b298a96-eca9-49eb-a547-f88e986f326e-secret-volume\") pod \"9b298a96-eca9-49eb-a547-f88e986f326e\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.595151 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvx8x\" (UniqueName: \"kubernetes.io/projected/9b298a96-eca9-49eb-a547-f88e986f326e-kube-api-access-vvx8x\") pod \"9b298a96-eca9-49eb-a547-f88e986f326e\" (UID: \"9b298a96-eca9-49eb-a547-f88e986f326e\") " Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.596766 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b298a96-eca9-49eb-a547-f88e986f326e-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b298a96-eca9-49eb-a547-f88e986f326e" (UID: "9b298a96-eca9-49eb-a547-f88e986f326e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.600780 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b298a96-eca9-49eb-a547-f88e986f326e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b298a96-eca9-49eb-a547-f88e986f326e" (UID: "9b298a96-eca9-49eb-a547-f88e986f326e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.609982 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b298a96-eca9-49eb-a547-f88e986f326e-kube-api-access-vvx8x" (OuterVolumeSpecName: "kube-api-access-vvx8x") pod "9b298a96-eca9-49eb-a547-f88e986f326e" (UID: "9b298a96-eca9-49eb-a547-f88e986f326e"). InnerVolumeSpecName "kube-api-access-vvx8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.671001 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" event={"ID":"9b298a96-eca9-49eb-a547-f88e986f326e","Type":"ContainerDied","Data":"73fa3e6eb5ecf757d1fa8efa2ff8a94d5bbdeb6c98a459819b28cec05f8462fa"} Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.671062 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73fa3e6eb5ecf757d1fa8efa2ff8a94d5bbdeb6c98a459819b28cec05f8462fa" Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.671025 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj" Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.671525 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.698348 4699 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b298a96-eca9-49eb-a547-f88e986f326e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.698381 4699 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b298a96-eca9-49eb-a547-f88e986f326e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:12 crc kubenswrapper[4699]: I0226 11:30:12.698393 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvx8x\" (UniqueName: \"kubernetes.io/projected/9b298a96-eca9-49eb-a547-f88e986f326e-kube-api-access-vvx8x\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:15 crc kubenswrapper[4699]: I0226 11:30:15.891636 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:30:16 crc kubenswrapper[4699]: I0226 11:30:16.168352 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:30:16 crc kubenswrapper[4699]: I0226 11:30:16.224250 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpxrq"] Feb 26 11:30:16 crc kubenswrapper[4699]: I0226 11:30:16.707727 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" podUID="9e16e518-0512-4df0-b8c7-1cd2f9c1e352" containerName="dnsmasq-dns" containerID="cri-o://fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6" gracePeriod=10 Feb 26 11:30:17 crc 
kubenswrapper[4699]: I0226 11:30:17.183468 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.284166 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-config\") pod \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.284719 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khq7k\" (UniqueName: \"kubernetes.io/projected/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-kube-api-access-khq7k\") pod \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.284752 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-dns-svc\") pod \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\" (UID: \"9e16e518-0512-4df0-b8c7-1cd2f9c1e352\") " Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.291272 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-kube-api-access-khq7k" (OuterVolumeSpecName: "kube-api-access-khq7k") pod "9e16e518-0512-4df0-b8c7-1cd2f9c1e352" (UID: "9e16e518-0512-4df0-b8c7-1cd2f9c1e352"). InnerVolumeSpecName "kube-api-access-khq7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.348961 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e16e518-0512-4df0-b8c7-1cd2f9c1e352" (UID: "9e16e518-0512-4df0-b8c7-1cd2f9c1e352"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.359677 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-config" (OuterVolumeSpecName: "config") pod "9e16e518-0512-4df0-b8c7-1cd2f9c1e352" (UID: "9e16e518-0512-4df0-b8c7-1cd2f9c1e352"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.386461 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.386483 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khq7k\" (UniqueName: \"kubernetes.io/projected/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-kube-api-access-khq7k\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.386495 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e16e518-0512-4df0-b8c7-1cd2f9c1e352-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.716517 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ef805480-81ec-4d0b-b2ca-06db4bf74383","Type":"ContainerStarted","Data":"83de79cf35fb56cc25b9ade694c104cc19ad051e503fbae8961ea45a36867761"} Feb 26 
11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.722608 4699 generic.go:334] "Generic (PLEG): container finished" podID="a0d38a99-b56f-423c-9c5b-c8f726bf62f9" containerID="02c1126ec0d166bfd6091e444f16da2788ee1d75f58864b8bc99a6f2547f9104" exitCode=0 Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.722742 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535090-7v44h" event={"ID":"a0d38a99-b56f-423c-9c5b-c8f726bf62f9","Type":"ContainerDied","Data":"02c1126ec0d166bfd6091e444f16da2788ee1d75f58864b8bc99a6f2547f9104"} Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.724942 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nrvng" event={"ID":"cd4015f0-f1a7-40d7-ae69-089f74a6873d","Type":"ContainerStarted","Data":"6daa0e89b9d465ed2a671b70b807249f8398d2cac3d7fa7e605d52f5a2b8b1c9"} Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.726105 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-nrvng" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.729737 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b981c8a5-ce76-4bc1-a018-28255391e3f2","Type":"ContainerStarted","Data":"f5b08dafae9646ed5b59889aec03efa40dde95cfd3131c9d3c0a40ce48338bd5"} Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.732599 4699 generic.go:334] "Generic (PLEG): container finished" podID="9e16e518-0512-4df0-b8c7-1cd2f9c1e352" containerID="fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6" exitCode=0 Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.732669 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" event={"ID":"9e16e518-0512-4df0-b8c7-1cd2f9c1e352","Type":"ContainerDied","Data":"fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6"} Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.732693 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" event={"ID":"9e16e518-0512-4df0-b8c7-1cd2f9c1e352","Type":"ContainerDied","Data":"2aaa9042481814730657b40428f86e835d8db2be305a9194e255d49a0c3e4409"} Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.732710 4699 scope.go:117] "RemoveContainer" containerID="fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.732851 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vpxrq" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.735977 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf","Type":"ContainerStarted","Data":"d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b"} Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.736422 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.742298 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2","Type":"ContainerStarted","Data":"d4e69267c485636aa7f9c0d96e2f0273d578817d607b0b5383e8f27e20ec9d5b"} Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.743539 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.748428 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gxnxl" event={"ID":"8afc038e-11dc-4959-a6b0-61e9b1c2dc35","Type":"ContainerStarted","Data":"30fd8efce92530234f9e98449df002b51929f5b86050e02a5e9ce686fe6ee5d5"} Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.753679 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"6fdc6b6d-ac77-4179-9864-f220d622c0f4","Type":"ContainerStarted","Data":"bc8cfe4cbc14669a7d23e320ce251a249775b1b75e5ecb548ae10249e750c023"} Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.757960 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9","Type":"ContainerStarted","Data":"bf6fd25fd5c5219234878667bbfc768fc4a7fc9b607b1bbc5dba75d0bb38306a"} Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.760479 4699 scope.go:117] "RemoveContainer" containerID="4e432431b1637ae6370a4d90a6195b4e6436618155eeb3ae004d2fe9bb9b8958" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.765033 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.018325433 podStartE2EDuration="20.765007959s" podCreationTimestamp="2026-02-26 11:29:57 +0000 UTC" firstStartedPulling="2026-02-26 11:30:09.210470084 +0000 UTC m=+1155.021296518" lastFinishedPulling="2026-02-26 11:30:16.95715261 +0000 UTC m=+1162.767979044" observedRunningTime="2026-02-26 11:30:17.758366267 +0000 UTC m=+1163.569192721" watchObservedRunningTime="2026-02-26 11:30:17.765007959 +0000 UTC m=+1163.575834403" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.785215 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nrvng" podStartSLOduration=10.767181605 podStartE2EDuration="17.785190322s" podCreationTimestamp="2026-02-26 11:30:00 +0000 UTC" firstStartedPulling="2026-02-26 11:30:09.871089309 +0000 UTC m=+1155.681915733" lastFinishedPulling="2026-02-26 11:30:16.889098016 +0000 UTC m=+1162.699924450" observedRunningTime="2026-02-26 11:30:17.776640575 +0000 UTC m=+1163.587467019" watchObservedRunningTime="2026-02-26 11:30:17.785190322 +0000 UTC m=+1163.596016756" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.794346 4699 scope.go:117] 
"RemoveContainer" containerID="fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6" Feb 26 11:30:17 crc kubenswrapper[4699]: E0226 11:30:17.794795 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6\": container with ID starting with fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6 not found: ID does not exist" containerID="fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.794833 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6"} err="failed to get container status \"fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6\": rpc error: code = NotFound desc = could not find container \"fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6\": container with ID starting with fbd96fb959f10fe807f04f188f2e74e23275f80b68cbba2481fd65c49eae67f6 not found: ID does not exist" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.794855 4699 scope.go:117] "RemoveContainer" containerID="4e432431b1637ae6370a4d90a6195b4e6436618155eeb3ae004d2fe9bb9b8958" Feb 26 11:30:17 crc kubenswrapper[4699]: E0226 11:30:17.798301 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e432431b1637ae6370a4d90a6195b4e6436618155eeb3ae004d2fe9bb9b8958\": container with ID starting with 4e432431b1637ae6370a4d90a6195b4e6436618155eeb3ae004d2fe9bb9b8958 not found: ID does not exist" containerID="4e432431b1637ae6370a4d90a6195b4e6436618155eeb3ae004d2fe9bb9b8958" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.798346 4699 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4e432431b1637ae6370a4d90a6195b4e6436618155eeb3ae004d2fe9bb9b8958"} err="failed to get container status \"4e432431b1637ae6370a4d90a6195b4e6436618155eeb3ae004d2fe9bb9b8958\": rpc error: code = NotFound desc = could not find container \"4e432431b1637ae6370a4d90a6195b4e6436618155eeb3ae004d2fe9bb9b8958\": container with ID starting with 4e432431b1637ae6370a4d90a6195b4e6436618155eeb3ae004d2fe9bb9b8958 not found: ID does not exist" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.835647 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.984357537 podStartE2EDuration="23.835628549s" podCreationTimestamp="2026-02-26 11:29:54 +0000 UTC" firstStartedPulling="2026-02-26 11:30:09.552096582 +0000 UTC m=+1155.362923026" lastFinishedPulling="2026-02-26 11:30:16.403367604 +0000 UTC m=+1162.214194038" observedRunningTime="2026-02-26 11:30:17.831580302 +0000 UTC m=+1163.642406746" watchObservedRunningTime="2026-02-26 11:30:17.835628549 +0000 UTC m=+1163.646454983" Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.886619 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpxrq"] Feb 26 11:30:17 crc kubenswrapper[4699]: I0226 11:30:17.894752 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpxrq"] Feb 26 11:30:18 crc kubenswrapper[4699]: I0226 11:30:18.269865 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e16e518-0512-4df0-b8c7-1cd2f9c1e352" path="/var/lib/kubelet/pods/9e16e518-0512-4df0-b8c7-1cd2f9c1e352/volumes" Feb 26 11:30:18 crc kubenswrapper[4699]: I0226 11:30:18.775655 4699 generic.go:334] "Generic (PLEG): container finished" podID="8afc038e-11dc-4959-a6b0-61e9b1c2dc35" containerID="30fd8efce92530234f9e98449df002b51929f5b86050e02a5e9ce686fe6ee5d5" exitCode=0 Feb 26 11:30:18 crc kubenswrapper[4699]: I0226 11:30:18.775722 4699 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gxnxl" event={"ID":"8afc038e-11dc-4959-a6b0-61e9b1c2dc35","Type":"ContainerDied","Data":"30fd8efce92530234f9e98449df002b51929f5b86050e02a5e9ce686fe6ee5d5"} Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.151445 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535090-7v44h" Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.318472 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmg6x\" (UniqueName: \"kubernetes.io/projected/a0d38a99-b56f-423c-9c5b-c8f726bf62f9-kube-api-access-jmg6x\") pod \"a0d38a99-b56f-423c-9c5b-c8f726bf62f9\" (UID: \"a0d38a99-b56f-423c-9c5b-c8f726bf62f9\") " Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.322103 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d38a99-b56f-423c-9c5b-c8f726bf62f9-kube-api-access-jmg6x" (OuterVolumeSpecName: "kube-api-access-jmg6x") pod "a0d38a99-b56f-423c-9c5b-c8f726bf62f9" (UID: "a0d38a99-b56f-423c-9c5b-c8f726bf62f9"). InnerVolumeSpecName "kube-api-access-jmg6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.420358 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmg6x\" (UniqueName: \"kubernetes.io/projected/a0d38a99-b56f-423c-9c5b-c8f726bf62f9-kube-api-access-jmg6x\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.786105 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ef805480-81ec-4d0b-b2ca-06db4bf74383","Type":"ContainerStarted","Data":"9946260ac01a55e7f1ab3f7896c759d9a52ef25112fb9dc7797036b2c1f1bc10"} Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.788662 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535090-7v44h" Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.788661 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535090-7v44h" event={"ID":"a0d38a99-b56f-423c-9c5b-c8f726bf62f9","Type":"ContainerDied","Data":"f0b01a3b5e5254bcfd2326338d3cde12bc7d1f83ca8ef3d4f65d618963dce401"} Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.788816 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0b01a3b5e5254bcfd2326338d3cde12bc7d1f83ca8ef3d4f65d618963dce401" Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.793416 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b981c8a5-ce76-4bc1-a018-28255391e3f2","Type":"ContainerStarted","Data":"d91c2a24145839adbe8cd440ccb150622ce74f6cb1b8640ea93e964ccb3524cc"} Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.797930 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gxnxl" event={"ID":"8afc038e-11dc-4959-a6b0-61e9b1c2dc35","Type":"ContainerStarted","Data":"cb74f1ae25097fb7fa82c7a7adf3297259141728c67285fb021aa734c2697509"} Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.797981 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gxnxl" event={"ID":"8afc038e-11dc-4959-a6b0-61e9b1c2dc35","Type":"ContainerStarted","Data":"3eebae866234458e23d0849a009bd08fb6aa50dedf47a3fb2fc906687ff99310"} Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.798000 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.798568 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gxnxl" Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.816777 4699 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.117685905 podStartE2EDuration="16.81675353s" podCreationTimestamp="2026-02-26 11:30:03 +0000 UTC" firstStartedPulling="2026-02-26 11:30:10.585249989 +0000 UTC m=+1156.396076423" lastFinishedPulling="2026-02-26 11:30:19.284317614 +0000 UTC m=+1165.095144048" observedRunningTime="2026-02-26 11:30:19.810293784 +0000 UTC m=+1165.621120218" watchObservedRunningTime="2026-02-26 11:30:19.81675353 +0000 UTC m=+1165.627579964" Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.838954 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-gxnxl" podStartSLOduration=13.547131424 podStartE2EDuration="19.838929791s" podCreationTimestamp="2026-02-26 11:30:00 +0000 UTC" firstStartedPulling="2026-02-26 11:30:10.589263572 +0000 UTC m=+1156.400090006" lastFinishedPulling="2026-02-26 11:30:16.881061949 +0000 UTC m=+1162.691888373" observedRunningTime="2026-02-26 11:30:19.837250902 +0000 UTC m=+1165.648077346" watchObservedRunningTime="2026-02-26 11:30:19.838929791 +0000 UTC m=+1165.649756235" Feb 26 11:30:19 crc kubenswrapper[4699]: I0226 11:30:19.867262 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.554973604 podStartE2EDuration="16.867240158s" podCreationTimestamp="2026-02-26 11:30:03 +0000 UTC" firstStartedPulling="2026-02-26 11:30:09.978433654 +0000 UTC m=+1155.789260088" lastFinishedPulling="2026-02-26 11:30:19.290700208 +0000 UTC m=+1165.101526642" observedRunningTime="2026-02-26 11:30:19.861687658 +0000 UTC m=+1165.672514102" watchObservedRunningTime="2026-02-26 11:30:19.867240158 +0000 UTC m=+1165.678066592" Feb 26 11:30:20 crc kubenswrapper[4699]: I0226 11:30:20.048070 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:20 crc kubenswrapper[4699]: I0226 11:30:20.048242 4699 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:20 crc kubenswrapper[4699]: I0226 11:30:20.088981 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:20 crc kubenswrapper[4699]: I0226 11:30:20.213422 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535084-h8xlt"] Feb 26 11:30:20 crc kubenswrapper[4699]: I0226 11:30:20.219009 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535084-h8xlt"] Feb 26 11:30:20 crc kubenswrapper[4699]: I0226 11:30:20.269973 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d6d072-33c5-4660-b6c3-80344c215e6a" path="/var/lib/kubelet/pods/98d6d072-33c5-4660-b6c3-80344c215e6a/volumes" Feb 26 11:30:21 crc kubenswrapper[4699]: I0226 11:30:21.224910 4699 scope.go:117] "RemoveContainer" containerID="dd76d54940753753e3f7a2683a8c241e99cd1928bc9d5ed547595d83c46f6f57" Feb 26 11:30:22 crc kubenswrapper[4699]: I0226 11:30:22.785327 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:22 crc kubenswrapper[4699]: I0226 11:30:22.822266 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:22 crc kubenswrapper[4699]: I0226 11:30:22.823840 4699 generic.go:334] "Generic (PLEG): container finished" podID="6fdc6b6d-ac77-4179-9864-f220d622c0f4" containerID="bc8cfe4cbc14669a7d23e320ce251a249775b1b75e5ecb548ae10249e750c023" exitCode=0 Feb 26 11:30:22 crc kubenswrapper[4699]: I0226 11:30:22.823895 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6fdc6b6d-ac77-4179-9864-f220d622c0f4","Type":"ContainerDied","Data":"bc8cfe4cbc14669a7d23e320ce251a249775b1b75e5ecb548ae10249e750c023"} Feb 26 11:30:22 crc kubenswrapper[4699]: I0226 11:30:22.824445 
4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:22 crc kubenswrapper[4699]: I0226 11:30:22.870521 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.117618 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-87kkf"] Feb 26 11:30:23 crc kubenswrapper[4699]: E0226 11:30:23.118349 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e16e518-0512-4df0-b8c7-1cd2f9c1e352" containerName="dnsmasq-dns" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.118370 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e16e518-0512-4df0-b8c7-1cd2f9c1e352" containerName="dnsmasq-dns" Feb 26 11:30:23 crc kubenswrapper[4699]: E0226 11:30:23.118409 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e16e518-0512-4df0-b8c7-1cd2f9c1e352" containerName="init" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.118418 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e16e518-0512-4df0-b8c7-1cd2f9c1e352" containerName="init" Feb 26 11:30:23 crc kubenswrapper[4699]: E0226 11:30:23.118441 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44038b95-eefd-44cd-9781-0a2273605e75" containerName="init" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.118448 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="44038b95-eefd-44cd-9781-0a2273605e75" containerName="init" Feb 26 11:30:23 crc kubenswrapper[4699]: E0226 11:30:23.118474 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b298a96-eca9-49eb-a547-f88e986f326e" containerName="collect-profiles" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.118481 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b298a96-eca9-49eb-a547-f88e986f326e" containerName="collect-profiles" Feb 26 11:30:23 crc 
kubenswrapper[4699]: E0226 11:30:23.118492 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d38a99-b56f-423c-9c5b-c8f726bf62f9" containerName="oc" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.118500 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d38a99-b56f-423c-9c5b-c8f726bf62f9" containerName="oc" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.118695 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="44038b95-eefd-44cd-9781-0a2273605e75" containerName="init" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.118713 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d38a99-b56f-423c-9c5b-c8f726bf62f9" containerName="oc" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.118724 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b298a96-eca9-49eb-a547-f88e986f326e" containerName="collect-profiles" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.118734 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e16e518-0512-4df0-b8c7-1cd2f9c1e352" containerName="dnsmasq-dns" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.119871 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.122213 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.130595 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-87kkf"] Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.176013 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qfxsz"] Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.177328 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.181236 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.200406 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qfxsz"] Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.283139 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4767003-9eba-4b86-933c-5bcbaa93e458-combined-ca-bundle\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.283195 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4767003-9eba-4b86-933c-5bcbaa93e458-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.283436 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4767003-9eba-4b86-933c-5bcbaa93e458-config\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.283528 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a4767003-9eba-4b86-933c-5bcbaa93e458-ovn-rundir\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " 
pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.283614 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9zb\" (UniqueName: \"kubernetes.io/projected/e0f71319-4adc-48a8-82d1-29a8a6bb7500-kube-api-access-8n9zb\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.283679 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.283787 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a4767003-9eba-4b86-933c-5bcbaa93e458-ovs-rundir\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.283839 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9pzj\" (UniqueName: \"kubernetes.io/projected/a4767003-9eba-4b86-933c-5bcbaa93e458-kube-api-access-g9pzj\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.283944 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-config\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: 
\"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.284019 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.386762 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4767003-9eba-4b86-933c-5bcbaa93e458-config\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.386835 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a4767003-9eba-4b86-933c-5bcbaa93e458-ovn-rundir\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.386912 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9zb\" (UniqueName: \"kubernetes.io/projected/e0f71319-4adc-48a8-82d1-29a8a6bb7500-kube-api-access-8n9zb\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.386964 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.387012 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a4767003-9eba-4b86-933c-5bcbaa93e458-ovs-rundir\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.387064 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9pzj\" (UniqueName: \"kubernetes.io/projected/a4767003-9eba-4b86-933c-5bcbaa93e458-kube-api-access-g9pzj\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.387163 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-config\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.387216 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.387263 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4767003-9eba-4b86-933c-5bcbaa93e458-combined-ca-bundle\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc 
kubenswrapper[4699]: I0226 11:30:23.387279 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a4767003-9eba-4b86-933c-5bcbaa93e458-ovn-rundir\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.387279 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a4767003-9eba-4b86-933c-5bcbaa93e458-ovs-rundir\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.387365 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4767003-9eba-4b86-933c-5bcbaa93e458-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.388029 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-config\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.388037 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.388499 4699 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.388852 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4767003-9eba-4b86-933c-5bcbaa93e458-config\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.392985 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4767003-9eba-4b86-933c-5bcbaa93e458-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.394823 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4767003-9eba-4b86-933c-5bcbaa93e458-combined-ca-bundle\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.406980 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n9zb\" (UniqueName: \"kubernetes.io/projected/e0f71319-4adc-48a8-82d1-29a8a6bb7500-kube-api-access-8n9zb\") pod \"dnsmasq-dns-5bf47b49b7-87kkf\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") " pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.413024 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9pzj\" (UniqueName: 
\"kubernetes.io/projected/a4767003-9eba-4b86-933c-5bcbaa93e458-kube-api-access-g9pzj\") pod \"ovn-controller-metrics-qfxsz\" (UID: \"a4767003-9eba-4b86-933c-5bcbaa93e458\") " pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.446464 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.484301 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-87kkf"] Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.498826 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qfxsz" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.522370 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-rx5rp"] Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.525362 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.529892 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.546485 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-rx5rp"] Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.591165 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-config\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.591253 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqzz\" (UniqueName: \"kubernetes.io/projected/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-kube-api-access-mjqzz\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.591284 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-dns-svc\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.591339 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " 
pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.591386 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.693329 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-config\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.693426 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqzz\" (UniqueName: \"kubernetes.io/projected/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-kube-api-access-mjqzz\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.693457 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-dns-svc\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.693517 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 
11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.693560 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.694757 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-config\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.694896 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-dns-svc\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.694920 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.695432 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.714884 4699 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mjqzz\" (UniqueName: \"kubernetes.io/projected/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-kube-api-access-mjqzz\") pod \"dnsmasq-dns-8554648995-rx5rp\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") " pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.845732 4699 generic.go:334] "Generic (PLEG): container finished" podID="edce8e75-6dd5-4fbd-8f76-bc6553cc27b9" containerID="bf6fd25fd5c5219234878667bbfc768fc4a7fc9b607b1bbc5dba75d0bb38306a" exitCode=0 Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.846266 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9","Type":"ContainerDied","Data":"bf6fd25fd5c5219234878667bbfc768fc4a7fc9b607b1bbc5dba75d0bb38306a"} Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.849955 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-rx5rp" Feb 26 11:30:23 crc kubenswrapper[4699]: I0226 11:30:23.986636 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-87kkf"] Feb 26 11:30:24 crc kubenswrapper[4699]: I0226 11:30:24.088802 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qfxsz"] Feb 26 11:30:24 crc kubenswrapper[4699]: I0226 11:30:24.358162 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-rx5rp"] Feb 26 11:30:24 crc kubenswrapper[4699]: W0226 11:30:24.359211 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84a51a2d_7b6c_4a4a_849f_7f02bcbaf87a.slice/crio-3c99596d39f423554ef17ae2aa77a54967d1e95b6f6bf8ba86b76cfbeba577ff WatchSource:0}: Error finding container 3c99596d39f423554ef17ae2aa77a54967d1e95b6f6bf8ba86b76cfbeba577ff: Status 404 returned error can't find the container with id 
3c99596d39f423554ef17ae2aa77a54967d1e95b6f6bf8ba86b76cfbeba577ff Feb 26 11:30:24 crc kubenswrapper[4699]: I0226 11:30:24.853589 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" event={"ID":"e0f71319-4adc-48a8-82d1-29a8a6bb7500","Type":"ContainerStarted","Data":"f70b3a342001c7db5b4059fb06e2519c604846c94f573dd9ba11e049e0643348"} Feb 26 11:30:24 crc kubenswrapper[4699]: I0226 11:30:24.854955 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qfxsz" event={"ID":"a4767003-9eba-4b86-933c-5bcbaa93e458","Type":"ContainerStarted","Data":"984cafebc7a4395333e2636a2f692969021197cb222ff4a24278d12bd1a90320"} Feb 26 11:30:24 crc kubenswrapper[4699]: I0226 11:30:24.857247 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-rx5rp" event={"ID":"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a","Type":"ContainerStarted","Data":"3c99596d39f423554ef17ae2aa77a54967d1e95b6f6bf8ba86b76cfbeba577ff"} Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.083431 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.183944 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.223833 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.225315 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.238215 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.238269 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-plkfw" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.238519 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.238717 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.241261 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.322965 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fbd47d6-02c1-4ac4-a981-231eb0f13530-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.323051 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fbd47d6-02c1-4ac4-a981-231eb0f13530-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.323072 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbd47d6-02c1-4ac4-a981-231eb0f13530-config\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " 
pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.323104 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fbd47d6-02c1-4ac4-a981-231eb0f13530-scripts\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.323285 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4rrl\" (UniqueName: \"kubernetes.io/projected/8fbd47d6-02c1-4ac4-a981-231eb0f13530-kube-api-access-r4rrl\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.323314 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbd47d6-02c1-4ac4-a981-231eb0f13530-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.323371 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8fbd47d6-02c1-4ac4-a981-231eb0f13530-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.425264 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fbd47d6-02c1-4ac4-a981-231eb0f13530-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.425311 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbd47d6-02c1-4ac4-a981-231eb0f13530-config\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.425344 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fbd47d6-02c1-4ac4-a981-231eb0f13530-scripts\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.425368 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4rrl\" (UniqueName: \"kubernetes.io/projected/8fbd47d6-02c1-4ac4-a981-231eb0f13530-kube-api-access-r4rrl\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.425388 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbd47d6-02c1-4ac4-a981-231eb0f13530-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.425444 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8fbd47d6-02c1-4ac4-a981-231eb0f13530-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.425552 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fbd47d6-02c1-4ac4-a981-231eb0f13530-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.426345 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8fbd47d6-02c1-4ac4-a981-231eb0f13530-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.426373 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbd47d6-02c1-4ac4-a981-231eb0f13530-config\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.426483 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fbd47d6-02c1-4ac4-a981-231eb0f13530-scripts\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.430833 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fbd47d6-02c1-4ac4-a981-231eb0f13530-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.430971 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbd47d6-02c1-4ac4-a981-231eb0f13530-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.434740 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8fbd47d6-02c1-4ac4-a981-231eb0f13530-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.455978 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4rrl\" (UniqueName: \"kubernetes.io/projected/8fbd47d6-02c1-4ac4-a981-231eb0f13530-kube-api-access-r4rrl\") pod \"ovn-northd-0\" (UID: \"8fbd47d6-02c1-4ac4-a981-231eb0f13530\") " pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.554907 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 11:30:25 crc kubenswrapper[4699]: I0226 11:30:25.975982 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 26 11:30:25 crc kubenswrapper[4699]: W0226 11:30:25.976399 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fbd47d6_02c1_4ac4_a981_231eb0f13530.slice/crio-bb93ad59326b9fe455fd100fa7d74a387c24b573a58b8705cf87c3955f4720de WatchSource:0}: Error finding container bb93ad59326b9fe455fd100fa7d74a387c24b573a58b8705cf87c3955f4720de: Status 404 returned error can't find the container with id bb93ad59326b9fe455fd100fa7d74a387c24b573a58b8705cf87c3955f4720de Feb 26 11:30:26 crc kubenswrapper[4699]: I0226 11:30:26.870333 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8fbd47d6-02c1-4ac4-a981-231eb0f13530","Type":"ContainerStarted","Data":"bb93ad59326b9fe455fd100fa7d74a387c24b573a58b8705cf87c3955f4720de"} Feb 26 11:30:27 crc kubenswrapper[4699]: I0226 11:30:27.717331 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 26 11:30:27 crc kubenswrapper[4699]: I0226 11:30:27.804451 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-8554648995-rx5rp"] Feb 26 11:30:27 crc kubenswrapper[4699]: I0226 11:30:27.829625 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-6nf48"] Feb 26 11:30:27 crc kubenswrapper[4699]: I0226 11:30:27.830971 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:27 crc kubenswrapper[4699]: I0226 11:30:27.850214 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-6nf48"] Feb 26 11:30:27 crc kubenswrapper[4699]: I0226 11:30:27.973355 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-config\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:27 crc kubenswrapper[4699]: I0226 11:30:27.973527 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:27 crc kubenswrapper[4699]: I0226 11:30:27.973596 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:27 crc kubenswrapper[4699]: I0226 11:30:27.973660 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:27 crc kubenswrapper[4699]: I0226 11:30:27.973729 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pjrg\" (UniqueName: \"kubernetes.io/projected/2a166832-199a-436c-85a2-4ccde527f180-kube-api-access-7pjrg\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.075534 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.075596 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.075666 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pjrg\" (UniqueName: \"kubernetes.io/projected/2a166832-199a-436c-85a2-4ccde527f180-kube-api-access-7pjrg\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.075753 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-config\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.075777 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.076886 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.076893 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-config\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.077049 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.077049 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: 
\"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.106291 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pjrg\" (UniqueName: \"kubernetes.io/projected/2a166832-199a-436c-85a2-4ccde527f180-kube-api-access-7pjrg\") pod \"dnsmasq-dns-b8fbc5445-6nf48\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.149823 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.616919 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-6nf48"] Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.885332 4699 generic.go:334] "Generic (PLEG): container finished" podID="84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a" containerID="24d9e2bdd993f65b648a62785a8a9bb52bde1911788d2bc3af7f542b158faa63" exitCode=0 Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.885392 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-rx5rp" event={"ID":"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a","Type":"ContainerDied","Data":"24d9e2bdd993f65b648a62785a8a9bb52bde1911788d2bc3af7f542b158faa63"} Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.887775 4699 generic.go:334] "Generic (PLEG): container finished" podID="e0f71319-4adc-48a8-82d1-29a8a6bb7500" containerID="445ab4d3ee3c89f4634bcfb0a33d6ea9b7825c4b93d1fd1727ebec918c7cc6e0" exitCode=0 Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.887848 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" event={"ID":"e0f71319-4adc-48a8-82d1-29a8a6bb7500","Type":"ContainerDied","Data":"445ab4d3ee3c89f4634bcfb0a33d6ea9b7825c4b93d1fd1727ebec918c7cc6e0"} Feb 26 11:30:28 crc 
kubenswrapper[4699]: I0226 11:30:28.890806 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6fdc6b6d-ac77-4179-9864-f220d622c0f4","Type":"ContainerStarted","Data":"2b3636f054dda5285e4c35de5b8f9641752e1f9f5af5a4146b6d4cb34172fda2"} Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.894255 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"edce8e75-6dd5-4fbd-8f76-bc6553cc27b9","Type":"ContainerStarted","Data":"787e4940aeb2a74392a4a5643cd807e47e393353cd12ad5bb452113b610b3397"} Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.896041 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qfxsz" event={"ID":"a4767003-9eba-4b86-933c-5bcbaa93e458","Type":"ContainerStarted","Data":"f01ac35f9fdf0369d52cce7cc5e603e07f36c304c89816ec522cd67b395bcec5"} Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.945606 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.955902422 podStartE2EDuration="35.945571917s" podCreationTimestamp="2026-02-26 11:29:53 +0000 UTC" firstStartedPulling="2026-02-26 11:30:09.900597874 +0000 UTC m=+1155.711424308" lastFinishedPulling="2026-02-26 11:30:16.890267379 +0000 UTC m=+1162.701093803" observedRunningTime="2026-02-26 11:30:28.944136715 +0000 UTC m=+1174.754963159" watchObservedRunningTime="2026-02-26 11:30:28.945571917 +0000 UTC m=+1174.756398351" Feb 26 11:30:28 crc kubenswrapper[4699]: I0226 11:30:28.968963 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qfxsz" podStartSLOduration=5.968933252 podStartE2EDuration="5.968933252s" podCreationTimestamp="2026-02-26 11:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:30:28.959299543 +0000 
UTC m=+1174.770125977" watchObservedRunningTime="2026-02-26 11:30:28.968933252 +0000 UTC m=+1174.779759686" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.017623 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.024971 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.029650 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.029696 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.029934 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-z4964" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.029650 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.035402 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.363270932 podStartE2EDuration="37.03537153s" podCreationTimestamp="2026-02-26 11:29:52 +0000 UTC" firstStartedPulling="2026-02-26 11:30:09.208959551 +0000 UTC m=+1155.019785985" lastFinishedPulling="2026-02-26 11:30:16.881060149 +0000 UTC m=+1162.691886583" observedRunningTime="2026-02-26 11:30:29.017595357 +0000 UTC m=+1174.828421801" watchObservedRunningTime="2026-02-26 11:30:29.03537153 +0000 UTC m=+1174.846197964" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.058673 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.100726 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.100815 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.100862 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-lock\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.100888 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.100940 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-cache\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.100961 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75z7t\" (UniqueName: 
\"kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-kube-api-access-75z7t\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.208388 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.208466 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-lock\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.208497 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.208558 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-cache\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.208582 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75z7t\" (UniqueName: \"kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-kube-api-access-75z7t\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc 
kubenswrapper[4699]: I0226 11:30:29.208633 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: E0226 11:30:29.208819 4699 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 11:30:29 crc kubenswrapper[4699]: E0226 11:30:29.208837 4699 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 11:30:29 crc kubenswrapper[4699]: E0226 11:30:29.208890 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift podName:f23ec57b-7ab1-4152-8108-e0e27b4ba95c nodeName:}" failed. No retries permitted until 2026-02-26 11:30:29.708868601 +0000 UTC m=+1175.519695035 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift") pod "swift-storage-0" (UID: "f23ec57b-7ab1-4152-8108-e0e27b4ba95c") : configmap "swift-ring-files" not found Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.209970 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-cache\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.210147 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-lock\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.210209 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.216748 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.242747 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75z7t\" (UniqueName: \"kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-kube-api-access-75z7t\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " 
pod="openstack/swift-storage-0"
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.254713 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0"
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.346488 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-rx5rp"
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.410897 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-dns-svc\") pod \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") "
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.411862 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-config\") pod \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") "
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.411928 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjqzz\" (UniqueName: \"kubernetes.io/projected/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-kube-api-access-mjqzz\") pod \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") "
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.412097 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-sb\") pod \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") "
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.412407 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-nb\") pod \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\" (UID: \"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a\") "
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.417856 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-kube-api-access-mjqzz" (OuterVolumeSpecName: "kube-api-access-mjqzz") pod "84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a" (UID: "84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a"). InnerVolumeSpecName "kube-api-access-mjqzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.436430 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a" (UID: "84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.436923 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a" (UID: "84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.439528 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf"
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.440516 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-config" (OuterVolumeSpecName: "config") pod "84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a" (UID: "84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.444760 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a" (UID: "84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.514576 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-dns-svc\") pod \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") "
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.514871 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-ovsdbserver-nb\") pod \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") "
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.515135 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-config\") pod \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") "
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.515269 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n9zb\" (UniqueName: \"kubernetes.io/projected/e0f71319-4adc-48a8-82d1-29a8a6bb7500-kube-api-access-8n9zb\") pod \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\" (UID: \"e0f71319-4adc-48a8-82d1-29a8a6bb7500\") "
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.515754 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.515834 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.515902 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.515964 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-config\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.516025 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjqzz\" (UniqueName: \"kubernetes.io/projected/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a-kube-api-access-mjqzz\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.518629 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f71319-4adc-48a8-82d1-29a8a6bb7500-kube-api-access-8n9zb" (OuterVolumeSpecName: "kube-api-access-8n9zb") pod "e0f71319-4adc-48a8-82d1-29a8a6bb7500" (UID: "e0f71319-4adc-48a8-82d1-29a8a6bb7500"). InnerVolumeSpecName "kube-api-access-8n9zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.532047 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0f71319-4adc-48a8-82d1-29a8a6bb7500" (UID: "e0f71319-4adc-48a8-82d1-29a8a6bb7500"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.532248 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-config" (OuterVolumeSpecName: "config") pod "e0f71319-4adc-48a8-82d1-29a8a6bb7500" (UID: "e0f71319-4adc-48a8-82d1-29a8a6bb7500"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.532876 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0f71319-4adc-48a8-82d1-29a8a6bb7500" (UID: "e0f71319-4adc-48a8-82d1-29a8a6bb7500"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.617674 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.617708 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.617722 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f71319-4adc-48a8-82d1-29a8a6bb7500-config\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.617733 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n9zb\" (UniqueName: \"kubernetes.io/projected/e0f71319-4adc-48a8-82d1-29a8a6bb7500-kube-api-access-8n9zb\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.719648 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0"
Feb 26 11:30:29 crc kubenswrapper[4699]: E0226 11:30:29.721012 4699 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 26 11:30:29 crc kubenswrapper[4699]: E0226 11:30:29.721043 4699 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 26 11:30:29 crc kubenswrapper[4699]: E0226 11:30:29.721247 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift podName:f23ec57b-7ab1-4152-8108-e0e27b4ba95c nodeName:}" failed. No retries permitted until 2026-02-26 11:30:30.721081902 +0000 UTC m=+1176.531908336 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift") pod "swift-storage-0" (UID: "f23ec57b-7ab1-4152-8108-e0e27b4ba95c") : configmap "swift-ring-files" not found
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.908857 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf" event={"ID":"e0f71319-4adc-48a8-82d1-29a8a6bb7500","Type":"ContainerDied","Data":"f70b3a342001c7db5b4059fb06e2519c604846c94f573dd9ba11e049e0643348"}
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.908908 4699 scope.go:117] "RemoveContainer" containerID="445ab4d3ee3c89f4634bcfb0a33d6ea9b7825c4b93d1fd1727ebec918c7cc6e0"
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.909032 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-87kkf"
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.916549 4699 generic.go:334] "Generic (PLEG): container finished" podID="2a166832-199a-436c-85a2-4ccde527f180" containerID="4ad9a83fa9f5197d955a8f1565b66571572dedbb333404d507411352c78978c6" exitCode=0
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.916680 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" event={"ID":"2a166832-199a-436c-85a2-4ccde527f180","Type":"ContainerDied","Data":"4ad9a83fa9f5197d955a8f1565b66571572dedbb333404d507411352c78978c6"}
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.916716 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" event={"ID":"2a166832-199a-436c-85a2-4ccde527f180","Type":"ContainerStarted","Data":"e37733ce4b3de5c1e636da1d778df1b2746e600646623b6c23cb5510f0a9db33"}
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.924151 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8fbd47d6-02c1-4ac4-a981-231eb0f13530","Type":"ContainerStarted","Data":"931d0ab71ab277a245269ac933b40aa1db31817a206320265f641db79ee1b41b"}
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.935147 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-rx5rp"
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.935205 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-rx5rp" event={"ID":"84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a","Type":"ContainerDied","Data":"3c99596d39f423554ef17ae2aa77a54967d1e95b6f6bf8ba86b76cfbeba577ff"}
Feb 26 11:30:29 crc kubenswrapper[4699]: I0226 11:30:29.972034 4699 scope.go:117] "RemoveContainer" containerID="24d9e2bdd993f65b648a62785a8a9bb52bde1911788d2bc3af7f542b158faa63"
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.023329 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-87kkf"]
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.035641 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-87kkf"]
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.058518 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-rx5rp"]
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.063808 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-rx5rp"]
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.269707 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a" path="/var/lib/kubelet/pods/84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a/volumes"
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.270253 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f71319-4adc-48a8-82d1-29a8a6bb7500" path="/var/lib/kubelet/pods/e0f71319-4adc-48a8-82d1-29a8a6bb7500/volumes"
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.737025 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0"
Feb 26 11:30:30 crc kubenswrapper[4699]: E0226 11:30:30.737218 4699 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 26 11:30:30 crc kubenswrapper[4699]: E0226 11:30:30.737548 4699 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 26 11:30:30 crc kubenswrapper[4699]: E0226 11:30:30.737613 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift podName:f23ec57b-7ab1-4152-8108-e0e27b4ba95c nodeName:}" failed. No retries permitted until 2026-02-26 11:30:32.737591787 +0000 UTC m=+1178.548418221 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift") pod "swift-storage-0" (UID: "f23ec57b-7ab1-4152-8108-e0e27b4ba95c") : configmap "swift-ring-files" not found
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.943497 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" event={"ID":"2a166832-199a-436c-85a2-4ccde527f180","Type":"ContainerStarted","Data":"a6963bcffe5d258cd49c8f7db7cd1ef0c3f71763a18cb9d09e9e8a608d9fa6bd"}
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.944537 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48"
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.946248 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8fbd47d6-02c1-4ac4-a981-231eb0f13530","Type":"ContainerStarted","Data":"31fcfd1e702ad2c30a0ec2023dd323706787df36e45990f279928b76e2e809f0"}
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.946394 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.968998 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" podStartSLOduration=3.968983059 podStartE2EDuration="3.968983059s" podCreationTimestamp="2026-02-26 11:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:30:30.963155161 +0000 UTC m=+1176.773981615" watchObservedRunningTime="2026-02-26 11:30:30.968983059 +0000 UTC m=+1176.779809493"
Feb 26 11:30:30 crc kubenswrapper[4699]: I0226 11:30:30.978140 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.230932742 podStartE2EDuration="5.978124554s" podCreationTimestamp="2026-02-26 11:30:25 +0000 UTC" firstStartedPulling="2026-02-26 11:30:25.978676248 +0000 UTC m=+1171.789502682" lastFinishedPulling="2026-02-26 11:30:29.72586806 +0000 UTC m=+1175.536694494" observedRunningTime="2026-02-26 11:30:30.977382352 +0000 UTC m=+1176.788208786" watchObservedRunningTime="2026-02-26 11:30:30.978124554 +0000 UTC m=+1176.788950988"
Feb 26 11:30:31 crc kubenswrapper[4699]: E0226 11:30:31.749242 4699 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.213:47266->38.102.83.213:34509: write tcp 38.102.83.213:47266->38.102.83.213:34509: write: broken pipe
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.772621 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0"
Feb 26 11:30:32 crc kubenswrapper[4699]: E0226 11:30:32.772835 4699 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 26 11:30:32 crc kubenswrapper[4699]: E0226 11:30:32.772881 4699 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 26 11:30:32 crc kubenswrapper[4699]: E0226 11:30:32.772960 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift podName:f23ec57b-7ab1-4152-8108-e0e27b4ba95c nodeName:}" failed. No retries permitted until 2026-02-26 11:30:36.772937635 +0000 UTC m=+1182.583764069 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift") pod "swift-storage-0" (UID: "f23ec57b-7ab1-4152-8108-e0e27b4ba95c") : configmap "swift-ring-files" not found
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.894183 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-lqqdx"]
Feb 26 11:30:32 crc kubenswrapper[4699]: E0226 11:30:32.894526 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f71319-4adc-48a8-82d1-29a8a6bb7500" containerName="init"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.894543 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f71319-4adc-48a8-82d1-29a8a6bb7500" containerName="init"
Feb 26 11:30:32 crc kubenswrapper[4699]: E0226 11:30:32.894565 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a" containerName="init"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.894571 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a" containerName="init"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.894728 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a51a2d-7b6c-4a4a-849f-7f02bcbaf87a" containerName="init"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.894743 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f71319-4adc-48a8-82d1-29a8a6bb7500" containerName="init"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.895265 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.897609 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.897626 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.905474 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.913137 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lqqdx"]
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.976001 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-scripts\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.976058 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-swiftconf\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.976148 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6d5r\" (UniqueName: \"kubernetes.io/projected/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-kube-api-access-k6d5r\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.976181 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-ring-data-devices\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.976214 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-dispersionconf\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.976234 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-etc-swift\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:32 crc kubenswrapper[4699]: I0226 11:30:32.976254 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-combined-ca-bundle\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.077638 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-ring-data-devices\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.077730 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-dispersionconf\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.077763 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-etc-swift\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.077805 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-combined-ca-bundle\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.077902 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-scripts\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.077956 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-swiftconf\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.078055 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6d5r\" (UniqueName: \"kubernetes.io/projected/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-kube-api-access-k6d5r\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.078554 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-etc-swift\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.078846 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-scripts\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.079034 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-ring-data-devices\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.088360 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-dispersionconf\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.088496 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-swiftconf\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.088600 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-combined-ca-bundle\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.111871 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6d5r\" (UniqueName: \"kubernetes.io/projected/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-kube-api-access-k6d5r\") pod \"swift-ring-rebalance-lqqdx\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.212151 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.638850 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lqqdx"]
Feb 26 11:30:33 crc kubenswrapper[4699]: W0226 11:30:33.643705 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9125ee3a_a0b6_469b_b79d_3a376f2d5d91.slice/crio-a3441a4af4d4bdfc7161195190910ab39fbc98b45c549fbd1135470c64e160b7 WatchSource:0}: Error finding container a3441a4af4d4bdfc7161195190910ab39fbc98b45c549fbd1135470c64e160b7: Status 404 returned error can't find the container with id a3441a4af4d4bdfc7161195190910ab39fbc98b45c549fbd1135470c64e160b7
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.973206 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lqqdx" event={"ID":"9125ee3a-a0b6-469b-b79d-3a376f2d5d91","Type":"ContainerStarted","Data":"a3441a4af4d4bdfc7161195190910ab39fbc98b45c549fbd1135470c64e160b7"}
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.982541 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 26 11:30:33 crc kubenswrapper[4699]: I0226 11:30:33.983902 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 26 11:30:35 crc kubenswrapper[4699]: I0226 11:30:35.123687 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 26 11:30:35 crc kubenswrapper[4699]: I0226 11:30:35.124955 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 26 11:30:35 crc kubenswrapper[4699]: I0226 11:30:35.193735 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.114021 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.285217 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.393479 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.579008 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8e68-account-create-update-bwkx8"]
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.580226 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8e68-account-create-update-bwkx8"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.582613 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.591050 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8e68-account-create-update-bwkx8"]
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.648266 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6e3aace-8f02-410d-8e7e-4fa61336435b-operator-scripts\") pod \"placement-8e68-account-create-update-bwkx8\" (UID: \"f6e3aace-8f02-410d-8e7e-4fa61336435b\") " pod="openstack/placement-8e68-account-create-update-bwkx8"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.648422 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r55vn\" (UniqueName: \"kubernetes.io/projected/f6e3aace-8f02-410d-8e7e-4fa61336435b-kube-api-access-r55vn\") pod \"placement-8e68-account-create-update-bwkx8\" (UID: \"f6e3aace-8f02-410d-8e7e-4fa61336435b\") " pod="openstack/placement-8e68-account-create-update-bwkx8"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.676824 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-29gg4"]
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.677989 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-29gg4"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.684071 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-29gg4"]
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.751108 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r55vn\" (UniqueName: \"kubernetes.io/projected/f6e3aace-8f02-410d-8e7e-4fa61336435b-kube-api-access-r55vn\") pod \"placement-8e68-account-create-update-bwkx8\" (UID: \"f6e3aace-8f02-410d-8e7e-4fa61336435b\") " pod="openstack/placement-8e68-account-create-update-bwkx8"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.751418 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6e3aace-8f02-410d-8e7e-4fa61336435b-operator-scripts\") pod \"placement-8e68-account-create-update-bwkx8\" (UID: \"f6e3aace-8f02-410d-8e7e-4fa61336435b\") " pod="openstack/placement-8e68-account-create-update-bwkx8"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.751511 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz95n\" (UniqueName: \"kubernetes.io/projected/e9392947-cd31-4afd-92c7-73bac0d4cbd3-kube-api-access-kz95n\") pod \"placement-db-create-29gg4\" (UID: \"e9392947-cd31-4afd-92c7-73bac0d4cbd3\") " pod="openstack/placement-db-create-29gg4"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.751611 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9392947-cd31-4afd-92c7-73bac0d4cbd3-operator-scripts\") pod \"placement-db-create-29gg4\" (UID: \"e9392947-cd31-4afd-92c7-73bac0d4cbd3\") " pod="openstack/placement-db-create-29gg4"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.752300 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6e3aace-8f02-410d-8e7e-4fa61336435b-operator-scripts\") pod \"placement-8e68-account-create-update-bwkx8\" (UID: \"f6e3aace-8f02-410d-8e7e-4fa61336435b\") " pod="openstack/placement-8e68-account-create-update-bwkx8"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.790164 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r55vn\" (UniqueName: \"kubernetes.io/projected/f6e3aace-8f02-410d-8e7e-4fa61336435b-kube-api-access-r55vn\") pod \"placement-8e68-account-create-update-bwkx8\" (UID: \"f6e3aace-8f02-410d-8e7e-4fa61336435b\") " pod="openstack/placement-8e68-account-create-update-bwkx8"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.853621 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9392947-cd31-4afd-92c7-73bac0d4cbd3-operator-scripts\") pod \"placement-db-create-29gg4\" (UID: \"e9392947-cd31-4afd-92c7-73bac0d4cbd3\") " pod="openstack/placement-db-create-29gg4"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.853768 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0"
Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.853845 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz95n\" (UniqueName: \"kubernetes.io/projected/e9392947-cd31-4afd-92c7-73bac0d4cbd3-kube-api-access-kz95n\") pod \"placement-db-create-29gg4\" (UID: \"e9392947-cd31-4afd-92c7-73bac0d4cbd3\") " pod="openstack/placement-db-create-29gg4"
Feb 26 11:30:36 crc kubenswrapper[4699]: E0226 11:30:36.854020 4699 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 26 11:30:36 crc kubenswrapper[4699]: E0226 11:30:36.854049 4699 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 26 11:30:36 crc kubenswrapper[4699]: E0226 11:30:36.854101 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift podName:f23ec57b-7ab1-4152-8108-e0e27b4ba95c nodeName:}" failed. No retries permitted until 2026-02-26 11:30:44.854084312 +0000 UTC m=+1190.664910746 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift") pod "swift-storage-0" (UID: "f23ec57b-7ab1-4152-8108-e0e27b4ba95c") : configmap "swift-ring-files" not found Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.855025 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9392947-cd31-4afd-92c7-73bac0d4cbd3-operator-scripts\") pod \"placement-db-create-29gg4\" (UID: \"e9392947-cd31-4afd-92c7-73bac0d4cbd3\") " pod="openstack/placement-db-create-29gg4" Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.889700 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz95n\" (UniqueName: \"kubernetes.io/projected/e9392947-cd31-4afd-92c7-73bac0d4cbd3-kube-api-access-kz95n\") pod \"placement-db-create-29gg4\" (UID: \"e9392947-cd31-4afd-92c7-73bac0d4cbd3\") " pod="openstack/placement-db-create-29gg4" Feb 26 11:30:36 crc kubenswrapper[4699]: I0226 11:30:36.900486 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8e68-account-create-update-bwkx8" Feb 26 11:30:37 crc kubenswrapper[4699]: I0226 11:30:37.000152 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-29gg4" Feb 26 11:30:37 crc kubenswrapper[4699]: W0226 11:30:37.499392 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e3aace_8f02_410d_8e7e_4fa61336435b.slice/crio-b596e01a17243fbf4997c4d2c8dfee292b581ac3cf78de67257c3b09a33dffa3 WatchSource:0}: Error finding container b596e01a17243fbf4997c4d2c8dfee292b581ac3cf78de67257c3b09a33dffa3: Status 404 returned error can't find the container with id b596e01a17243fbf4997c4d2c8dfee292b581ac3cf78de67257c3b09a33dffa3 Feb 26 11:30:37 crc kubenswrapper[4699]: I0226 11:30:37.500581 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8e68-account-create-update-bwkx8"] Feb 26 11:30:37 crc kubenswrapper[4699]: I0226 11:30:37.589702 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-29gg4"] Feb 26 11:30:37 crc kubenswrapper[4699]: W0226 11:30:37.591571 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9392947_cd31_4afd_92c7_73bac0d4cbd3.slice/crio-b278a38ee63ea38861814367d188a15a9a6283a23e63740d2fe8992f0f5bf2f7 WatchSource:0}: Error finding container b278a38ee63ea38861814367d188a15a9a6283a23e63740d2fe8992f0f5bf2f7: Status 404 returned error can't find the container with id b278a38ee63ea38861814367d188a15a9a6283a23e63740d2fe8992f0f5bf2f7 Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.003627 4699 generic.go:334] "Generic (PLEG): container finished" podID="f6e3aace-8f02-410d-8e7e-4fa61336435b" containerID="6bf24901f54aea8222e7ac0b7dea606ea0a09d83f0dad7544b8e7bc98249b1e8" exitCode=0 Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.004018 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8e68-account-create-update-bwkx8" 
event={"ID":"f6e3aace-8f02-410d-8e7e-4fa61336435b","Type":"ContainerDied","Data":"6bf24901f54aea8222e7ac0b7dea606ea0a09d83f0dad7544b8e7bc98249b1e8"} Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.004044 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8e68-account-create-update-bwkx8" event={"ID":"f6e3aace-8f02-410d-8e7e-4fa61336435b","Type":"ContainerStarted","Data":"b596e01a17243fbf4997c4d2c8dfee292b581ac3cf78de67257c3b09a33dffa3"} Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.005383 4699 generic.go:334] "Generic (PLEG): container finished" podID="e9392947-cd31-4afd-92c7-73bac0d4cbd3" containerID="02517dfaa484539c60d2ef72e32d7a113f0b9a11e109ec31ac01691b7f015d05" exitCode=0 Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.005434 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-29gg4" event={"ID":"e9392947-cd31-4afd-92c7-73bac0d4cbd3","Type":"ContainerDied","Data":"02517dfaa484539c60d2ef72e32d7a113f0b9a11e109ec31ac01691b7f015d05"} Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.005449 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-29gg4" event={"ID":"e9392947-cd31-4afd-92c7-73bac0d4cbd3","Type":"ContainerStarted","Data":"b278a38ee63ea38861814367d188a15a9a6283a23e63740d2fe8992f0f5bf2f7"} Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.006707 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lqqdx" event={"ID":"9125ee3a-a0b6-469b-b79d-3a376f2d5d91","Type":"ContainerStarted","Data":"11bb20834c3902f477ab036d4f74aa6b8faa916aaeb98d82af08a9d084ddec28"} Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.054508 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-lqqdx" podStartSLOduration=2.476964355 podStartE2EDuration="6.054487218s" podCreationTimestamp="2026-02-26 11:30:32 +0000 UTC" firstStartedPulling="2026-02-26 
11:30:33.645806792 +0000 UTC m=+1179.456633226" lastFinishedPulling="2026-02-26 11:30:37.223329655 +0000 UTC m=+1183.034156089" observedRunningTime="2026-02-26 11:30:38.047994941 +0000 UTC m=+1183.858821395" watchObservedRunningTime="2026-02-26 11:30:38.054487218 +0000 UTC m=+1183.865313652" Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.152251 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.220793 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mwnwv"] Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.221046 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" podUID="13838b5f-5f0e-44ba-8b63-97b4e20efbce" containerName="dnsmasq-dns" containerID="cri-o://66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800" gracePeriod=10 Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.703007 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.786983 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-config\") pod \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.787074 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx227\" (UniqueName: \"kubernetes.io/projected/13838b5f-5f0e-44ba-8b63-97b4e20efbce-kube-api-access-vx227\") pod \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.787281 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-dns-svc\") pod \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\" (UID: \"13838b5f-5f0e-44ba-8b63-97b4e20efbce\") " Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.793244 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13838b5f-5f0e-44ba-8b63-97b4e20efbce-kube-api-access-vx227" (OuterVolumeSpecName: "kube-api-access-vx227") pod "13838b5f-5f0e-44ba-8b63-97b4e20efbce" (UID: "13838b5f-5f0e-44ba-8b63-97b4e20efbce"). InnerVolumeSpecName "kube-api-access-vx227". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.828274 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-config" (OuterVolumeSpecName: "config") pod "13838b5f-5f0e-44ba-8b63-97b4e20efbce" (UID: "13838b5f-5f0e-44ba-8b63-97b4e20efbce"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.829292 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13838b5f-5f0e-44ba-8b63-97b4e20efbce" (UID: "13838b5f-5f0e-44ba-8b63-97b4e20efbce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.889668 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx227\" (UniqueName: \"kubernetes.io/projected/13838b5f-5f0e-44ba-8b63-97b4e20efbce-kube-api-access-vx227\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.889720 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:38 crc kubenswrapper[4699]: I0226 11:30:38.889733 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13838b5f-5f0e-44ba-8b63-97b4e20efbce-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.014506 4699 generic.go:334] "Generic (PLEG): container finished" podID="13838b5f-5f0e-44ba-8b63-97b4e20efbce" containerID="66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800" exitCode=0 Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.014588 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.014612 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" event={"ID":"13838b5f-5f0e-44ba-8b63-97b4e20efbce","Type":"ContainerDied","Data":"66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800"} Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.014655 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mwnwv" event={"ID":"13838b5f-5f0e-44ba-8b63-97b4e20efbce","Type":"ContainerDied","Data":"30da7116ef227de51d61b599067ff253e3fbcd27cb9bf2e3d4c83d06e5a7374a"} Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.014674 4699 scope.go:117] "RemoveContainer" containerID="66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.057983 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mwnwv"] Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.072298 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mwnwv"] Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.086908 4699 scope.go:117] "RemoveContainer" containerID="d9b6954a7b897d0e33bc9da8a8d509e007749e72356e89501274b2dcc58768fe" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.148830 4699 scope.go:117] "RemoveContainer" containerID="66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800" Feb 26 11:30:39 crc kubenswrapper[4699]: E0226 11:30:39.152230 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800\": container with ID starting with 66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800 not found: ID does not exist" 
containerID="66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.152273 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800"} err="failed to get container status \"66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800\": rpc error: code = NotFound desc = could not find container \"66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800\": container with ID starting with 66f36dd399631c094ac2a9c045c8b98455818a36d07ce2ecb6290645636b2800 not found: ID does not exist" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.152297 4699 scope.go:117] "RemoveContainer" containerID="d9b6954a7b897d0e33bc9da8a8d509e007749e72356e89501274b2dcc58768fe" Feb 26 11:30:39 crc kubenswrapper[4699]: E0226 11:30:39.154357 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9b6954a7b897d0e33bc9da8a8d509e007749e72356e89501274b2dcc58768fe\": container with ID starting with d9b6954a7b897d0e33bc9da8a8d509e007749e72356e89501274b2dcc58768fe not found: ID does not exist" containerID="d9b6954a7b897d0e33bc9da8a8d509e007749e72356e89501274b2dcc58768fe" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.154396 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b6954a7b897d0e33bc9da8a8d509e007749e72356e89501274b2dcc58768fe"} err="failed to get container status \"d9b6954a7b897d0e33bc9da8a8d509e007749e72356e89501274b2dcc58768fe\": rpc error: code = NotFound desc = could not find container \"d9b6954a7b897d0e33bc9da8a8d509e007749e72356e89501274b2dcc58768fe\": container with ID starting with d9b6954a7b897d0e33bc9da8a8d509e007749e72356e89501274b2dcc58768fe not found: ID does not exist" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.469762 4699 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-29gg4" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.483092 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8e68-account-create-update-bwkx8" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.602736 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz95n\" (UniqueName: \"kubernetes.io/projected/e9392947-cd31-4afd-92c7-73bac0d4cbd3-kube-api-access-kz95n\") pod \"e9392947-cd31-4afd-92c7-73bac0d4cbd3\" (UID: \"e9392947-cd31-4afd-92c7-73bac0d4cbd3\") " Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.603203 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6e3aace-8f02-410d-8e7e-4fa61336435b-operator-scripts\") pod \"f6e3aace-8f02-410d-8e7e-4fa61336435b\" (UID: \"f6e3aace-8f02-410d-8e7e-4fa61336435b\") " Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.603234 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9392947-cd31-4afd-92c7-73bac0d4cbd3-operator-scripts\") pod \"e9392947-cd31-4afd-92c7-73bac0d4cbd3\" (UID: \"e9392947-cd31-4afd-92c7-73bac0d4cbd3\") " Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.603443 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r55vn\" (UniqueName: \"kubernetes.io/projected/f6e3aace-8f02-410d-8e7e-4fa61336435b-kube-api-access-r55vn\") pod \"f6e3aace-8f02-410d-8e7e-4fa61336435b\" (UID: \"f6e3aace-8f02-410d-8e7e-4fa61336435b\") " Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.603767 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6e3aace-8f02-410d-8e7e-4fa61336435b-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "f6e3aace-8f02-410d-8e7e-4fa61336435b" (UID: "f6e3aace-8f02-410d-8e7e-4fa61336435b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.604150 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9392947-cd31-4afd-92c7-73bac0d4cbd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9392947-cd31-4afd-92c7-73bac0d4cbd3" (UID: "e9392947-cd31-4afd-92c7-73bac0d4cbd3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.607970 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9392947-cd31-4afd-92c7-73bac0d4cbd3-kube-api-access-kz95n" (OuterVolumeSpecName: "kube-api-access-kz95n") pod "e9392947-cd31-4afd-92c7-73bac0d4cbd3" (UID: "e9392947-cd31-4afd-92c7-73bac0d4cbd3"). InnerVolumeSpecName "kube-api-access-kz95n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.608012 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e3aace-8f02-410d-8e7e-4fa61336435b-kube-api-access-r55vn" (OuterVolumeSpecName: "kube-api-access-r55vn") pod "f6e3aace-8f02-410d-8e7e-4fa61336435b" (UID: "f6e3aace-8f02-410d-8e7e-4fa61336435b"). InnerVolumeSpecName "kube-api-access-r55vn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.705380 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r55vn\" (UniqueName: \"kubernetes.io/projected/f6e3aace-8f02-410d-8e7e-4fa61336435b-kube-api-access-r55vn\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.705409 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz95n\" (UniqueName: \"kubernetes.io/projected/e9392947-cd31-4afd-92c7-73bac0d4cbd3-kube-api-access-kz95n\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.705420 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6e3aace-8f02-410d-8e7e-4fa61336435b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:39 crc kubenswrapper[4699]: I0226 11:30:39.705428 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9392947-cd31-4afd-92c7-73bac0d4cbd3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:40 crc kubenswrapper[4699]: I0226 11:30:40.024014 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8e68-account-create-update-bwkx8" event={"ID":"f6e3aace-8f02-410d-8e7e-4fa61336435b","Type":"ContainerDied","Data":"b596e01a17243fbf4997c4d2c8dfee292b581ac3cf78de67257c3b09a33dffa3"} Feb 26 11:30:40 crc kubenswrapper[4699]: I0226 11:30:40.024066 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b596e01a17243fbf4997c4d2c8dfee292b581ac3cf78de67257c3b09a33dffa3" Feb 26 11:30:40 crc kubenswrapper[4699]: I0226 11:30:40.024154 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8e68-account-create-update-bwkx8" Feb 26 11:30:40 crc kubenswrapper[4699]: I0226 11:30:40.032191 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-29gg4" event={"ID":"e9392947-cd31-4afd-92c7-73bac0d4cbd3","Type":"ContainerDied","Data":"b278a38ee63ea38861814367d188a15a9a6283a23e63740d2fe8992f0f5bf2f7"} Feb 26 11:30:40 crc kubenswrapper[4699]: I0226 11:30:40.032230 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b278a38ee63ea38861814367d188a15a9a6283a23e63740d2fe8992f0f5bf2f7" Feb 26 11:30:40 crc kubenswrapper[4699]: I0226 11:30:40.032364 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-29gg4" Feb 26 11:30:40 crc kubenswrapper[4699]: I0226 11:30:40.271368 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13838b5f-5f0e-44ba-8b63-97b4e20efbce" path="/var/lib/kubelet/pods/13838b5f-5f0e-44ba-8b63-97b4e20efbce/volumes" Feb 26 11:30:41 crc kubenswrapper[4699]: I0226 11:30:41.585490 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:30:41 crc kubenswrapper[4699]: I0226 11:30:41.585781 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.347182 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-47sx2"] Feb 26 11:30:42 crc 
kubenswrapper[4699]: E0226 11:30:42.347563 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13838b5f-5f0e-44ba-8b63-97b4e20efbce" containerName="dnsmasq-dns" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.347602 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="13838b5f-5f0e-44ba-8b63-97b4e20efbce" containerName="dnsmasq-dns" Feb 26 11:30:42 crc kubenswrapper[4699]: E0226 11:30:42.347614 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e3aace-8f02-410d-8e7e-4fa61336435b" containerName="mariadb-account-create-update" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.347621 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e3aace-8f02-410d-8e7e-4fa61336435b" containerName="mariadb-account-create-update" Feb 26 11:30:42 crc kubenswrapper[4699]: E0226 11:30:42.347635 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9392947-cd31-4afd-92c7-73bac0d4cbd3" containerName="mariadb-database-create" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.347643 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9392947-cd31-4afd-92c7-73bac0d4cbd3" containerName="mariadb-database-create" Feb 26 11:30:42 crc kubenswrapper[4699]: E0226 11:30:42.347668 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13838b5f-5f0e-44ba-8b63-97b4e20efbce" containerName="init" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.347678 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="13838b5f-5f0e-44ba-8b63-97b4e20efbce" containerName="init" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.347873 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="13838b5f-5f0e-44ba-8b63-97b4e20efbce" containerName="dnsmasq-dns" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.347888 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9392947-cd31-4afd-92c7-73bac0d4cbd3" containerName="mariadb-database-create" Feb 26 
11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.347902 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e3aace-8f02-410d-8e7e-4fa61336435b" containerName="mariadb-account-create-update" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.348526 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-47sx2" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.351673 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.358995 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-47sx2"] Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.450821 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9stw5\" (UniqueName: \"kubernetes.io/projected/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-kube-api-access-9stw5\") pod \"root-account-create-update-47sx2\" (UID: \"acabaa2a-471d-49a2-9e75-b5c1a8eb590e\") " pod="openstack/root-account-create-update-47sx2" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.450969 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-operator-scripts\") pod \"root-account-create-update-47sx2\" (UID: \"acabaa2a-471d-49a2-9e75-b5c1a8eb590e\") " pod="openstack/root-account-create-update-47sx2" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.552952 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9stw5\" (UniqueName: \"kubernetes.io/projected/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-kube-api-access-9stw5\") pod \"root-account-create-update-47sx2\" (UID: \"acabaa2a-471d-49a2-9e75-b5c1a8eb590e\") " 
pod="openstack/root-account-create-update-47sx2" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.553064 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-operator-scripts\") pod \"root-account-create-update-47sx2\" (UID: \"acabaa2a-471d-49a2-9e75-b5c1a8eb590e\") " pod="openstack/root-account-create-update-47sx2" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.553880 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-operator-scripts\") pod \"root-account-create-update-47sx2\" (UID: \"acabaa2a-471d-49a2-9e75-b5c1a8eb590e\") " pod="openstack/root-account-create-update-47sx2" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.570155 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9stw5\" (UniqueName: \"kubernetes.io/projected/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-kube-api-access-9stw5\") pod \"root-account-create-update-47sx2\" (UID: \"acabaa2a-471d-49a2-9e75-b5c1a8eb590e\") " pod="openstack/root-account-create-update-47sx2" Feb 26 11:30:42 crc kubenswrapper[4699]: I0226 11:30:42.669868 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-47sx2" Feb 26 11:30:43 crc kubenswrapper[4699]: I0226 11:30:43.034828 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-47sx2"] Feb 26 11:30:43 crc kubenswrapper[4699]: I0226 11:30:43.060933 4699 generic.go:334] "Generic (PLEG): container finished" podID="2d57084d-dc87-44e4-bbc8-50c402b7165b" containerID="4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629" exitCode=0 Feb 26 11:30:43 crc kubenswrapper[4699]: I0226 11:30:43.061007 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d57084d-dc87-44e4-bbc8-50c402b7165b","Type":"ContainerDied","Data":"4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629"} Feb 26 11:30:43 crc kubenswrapper[4699]: I0226 11:30:43.064868 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4a652b4-5b96-4ebf-81b4-df92846455bd" containerID="01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f" exitCode=0 Feb 26 11:30:43 crc kubenswrapper[4699]: I0226 11:30:43.064921 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f4a652b4-5b96-4ebf-81b4-df92846455bd","Type":"ContainerDied","Data":"01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f"} Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.073823 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f4a652b4-5b96-4ebf-81b4-df92846455bd","Type":"ContainerStarted","Data":"5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7"} Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.074405 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.082342 4699 generic.go:334] "Generic (PLEG): container finished" podID="acabaa2a-471d-49a2-9e75-b5c1a8eb590e" 
containerID="3fc8431c0d9189816a6d87bbbf1bde79cfcb29458f69200822c417c75941073b" exitCode=0 Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.082397 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-47sx2" event={"ID":"acabaa2a-471d-49a2-9e75-b5c1a8eb590e","Type":"ContainerDied","Data":"3fc8431c0d9189816a6d87bbbf1bde79cfcb29458f69200822c417c75941073b"} Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.082422 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-47sx2" event={"ID":"acabaa2a-471d-49a2-9e75-b5c1a8eb590e","Type":"ContainerStarted","Data":"7a667b210df76a1b9615f469b7a212c9e1abf68463fdab080abb7ecfefcc2f05"} Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.083884 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d57084d-dc87-44e4-bbc8-50c402b7165b","Type":"ContainerStarted","Data":"34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625"} Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.084631 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.100396 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=47.336488712 podStartE2EDuration="54.100379684s" podCreationTimestamp="2026-02-26 11:29:50 +0000 UTC" firstStartedPulling="2026-02-26 11:30:02.536307278 +0000 UTC m=+1148.347133712" lastFinishedPulling="2026-02-26 11:30:09.30019825 +0000 UTC m=+1155.111024684" observedRunningTime="2026-02-26 11:30:44.097652515 +0000 UTC m=+1189.908478959" watchObservedRunningTime="2026-02-26 11:30:44.100379684 +0000 UTC m=+1189.911206118" Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.147976 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=43.468588836 podStartE2EDuration="54.147956908s" podCreationTimestamp="2026-02-26 11:29:50 +0000 UTC" firstStartedPulling="2026-02-26 11:29:58.583180904 +0000 UTC m=+1144.394007338" lastFinishedPulling="2026-02-26 11:30:09.262548976 +0000 UTC m=+1155.073375410" observedRunningTime="2026-02-26 11:30:44.143556381 +0000 UTC m=+1189.954382835" watchObservedRunningTime="2026-02-26 11:30:44.147956908 +0000 UTC m=+1189.958783352" Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.901698 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.907654 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f23ec57b-7ab1-4152-8108-e0e27b4ba95c-etc-swift\") pod \"swift-storage-0\" (UID: \"f23ec57b-7ab1-4152-8108-e0e27b4ba95c\") " pod="openstack/swift-storage-0" Feb 26 11:30:44 crc kubenswrapper[4699]: I0226 11:30:44.954042 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.091645 4699 generic.go:334] "Generic (PLEG): container finished" podID="9125ee3a-a0b6-469b-b79d-3a376f2d5d91" containerID="11bb20834c3902f477ab036d4f74aa6b8faa916aaeb98d82af08a9d084ddec28" exitCode=0 Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.091789 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lqqdx" event={"ID":"9125ee3a-a0b6-469b-b79d-3a376f2d5d91","Type":"ContainerDied","Data":"11bb20834c3902f477ab036d4f74aa6b8faa916aaeb98d82af08a9d084ddec28"} Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.416748 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-nhpn8"] Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.418519 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nhpn8" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.424267 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nhpn8"] Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.511146 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzxwx\" (UniqueName: \"kubernetes.io/projected/0e74821a-c4e5-4812-829d-c6b60b6657b8-kube-api-access-fzxwx\") pod \"glance-db-create-nhpn8\" (UID: \"0e74821a-c4e5-4812-829d-c6b60b6657b8\") " pod="openstack/glance-db-create-nhpn8" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.511221 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74821a-c4e5-4812-829d-c6b60b6657b8-operator-scripts\") pod \"glance-db-create-nhpn8\" (UID: \"0e74821a-c4e5-4812-829d-c6b60b6657b8\") " pod="openstack/glance-db-create-nhpn8" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.526598 4699 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-47sx2" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.549389 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f9e8-account-create-update-zqq4d"] Feb 26 11:30:45 crc kubenswrapper[4699]: E0226 11:30:45.549805 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acabaa2a-471d-49a2-9e75-b5c1a8eb590e" containerName="mariadb-account-create-update" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.549823 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="acabaa2a-471d-49a2-9e75-b5c1a8eb590e" containerName="mariadb-account-create-update" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.550004 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="acabaa2a-471d-49a2-9e75-b5c1a8eb590e" containerName="mariadb-account-create-update" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.550621 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f9e8-account-create-update-zqq4d" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.552956 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.569331 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f9e8-account-create-update-zqq4d"] Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.617212 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9stw5\" (UniqueName: \"kubernetes.io/projected/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-kube-api-access-9stw5\") pod \"acabaa2a-471d-49a2-9e75-b5c1a8eb590e\" (UID: \"acabaa2a-471d-49a2-9e75-b5c1a8eb590e\") " Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.619949 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "acabaa2a-471d-49a2-9e75-b5c1a8eb590e" (UID: "acabaa2a-471d-49a2-9e75-b5c1a8eb590e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.618045 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-operator-scripts\") pod \"acabaa2a-471d-49a2-9e75-b5c1a8eb590e\" (UID: \"acabaa2a-471d-49a2-9e75-b5c1a8eb590e\") " Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.620350 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74821a-c4e5-4812-829d-c6b60b6657b8-operator-scripts\") pod \"glance-db-create-nhpn8\" (UID: \"0e74821a-c4e5-4812-829d-c6b60b6657b8\") " pod="openstack/glance-db-create-nhpn8" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.621023 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74821a-c4e5-4812-829d-c6b60b6657b8-operator-scripts\") pod \"glance-db-create-nhpn8\" (UID: \"0e74821a-c4e5-4812-829d-c6b60b6657b8\") " pod="openstack/glance-db-create-nhpn8" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.621162 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c209748-0c47-4bbb-883b-f4c245b6a156-operator-scripts\") pod \"glance-f9e8-account-create-update-zqq4d\" (UID: \"9c209748-0c47-4bbb-883b-f4c245b6a156\") " pod="openstack/glance-f9e8-account-create-update-zqq4d" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.623823 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-kube-api-access-9stw5" (OuterVolumeSpecName: "kube-api-access-9stw5") pod "acabaa2a-471d-49a2-9e75-b5c1a8eb590e" (UID: "acabaa2a-471d-49a2-9e75-b5c1a8eb590e"). 
InnerVolumeSpecName "kube-api-access-9stw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.624673 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f7fx\" (UniqueName: \"kubernetes.io/projected/9c209748-0c47-4bbb-883b-f4c245b6a156-kube-api-access-2f7fx\") pod \"glance-f9e8-account-create-update-zqq4d\" (UID: \"9c209748-0c47-4bbb-883b-f4c245b6a156\") " pod="openstack/glance-f9e8-account-create-update-zqq4d" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.625302 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzxwx\" (UniqueName: \"kubernetes.io/projected/0e74821a-c4e5-4812-829d-c6b60b6657b8-kube-api-access-fzxwx\") pod \"glance-db-create-nhpn8\" (UID: \"0e74821a-c4e5-4812-829d-c6b60b6657b8\") " pod="openstack/glance-db-create-nhpn8" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.625779 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.625794 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9stw5\" (UniqueName: \"kubernetes.io/projected/acabaa2a-471d-49a2-9e75-b5c1a8eb590e-kube-api-access-9stw5\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.628653 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.648077 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzxwx\" (UniqueName: \"kubernetes.io/projected/0e74821a-c4e5-4812-829d-c6b60b6657b8-kube-api-access-fzxwx\") pod \"glance-db-create-nhpn8\" (UID: \"0e74821a-c4e5-4812-829d-c6b60b6657b8\") " 
pod="openstack/glance-db-create-nhpn8" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.713006 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.729442 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c209748-0c47-4bbb-883b-f4c245b6a156-operator-scripts\") pod \"glance-f9e8-account-create-update-zqq4d\" (UID: \"9c209748-0c47-4bbb-883b-f4c245b6a156\") " pod="openstack/glance-f9e8-account-create-update-zqq4d" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.729495 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f7fx\" (UniqueName: \"kubernetes.io/projected/9c209748-0c47-4bbb-883b-f4c245b6a156-kube-api-access-2f7fx\") pod \"glance-f9e8-account-create-update-zqq4d\" (UID: \"9c209748-0c47-4bbb-883b-f4c245b6a156\") " pod="openstack/glance-f9e8-account-create-update-zqq4d" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.731063 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c209748-0c47-4bbb-883b-f4c245b6a156-operator-scripts\") pod \"glance-f9e8-account-create-update-zqq4d\" (UID: \"9c209748-0c47-4bbb-883b-f4c245b6a156\") " pod="openstack/glance-f9e8-account-create-update-zqq4d" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.740474 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-nhpn8" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.750450 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f7fx\" (UniqueName: \"kubernetes.io/projected/9c209748-0c47-4bbb-883b-f4c245b6a156-kube-api-access-2f7fx\") pod \"glance-f9e8-account-create-update-zqq4d\" (UID: \"9c209748-0c47-4bbb-883b-f4c245b6a156\") " pod="openstack/glance-f9e8-account-create-update-zqq4d" Feb 26 11:30:45 crc kubenswrapper[4699]: I0226 11:30:45.867635 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f9e8-account-create-update-zqq4d" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.102911 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"3586cfc7a59a5c437d5417c4ce50a7a439a961c8aa77b8f836d57d0a464bd67f"} Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.104523 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-47sx2" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.108385 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-47sx2" event={"ID":"acabaa2a-471d-49a2-9e75-b5c1a8eb590e","Type":"ContainerDied","Data":"7a667b210df76a1b9615f469b7a212c9e1abf68463fdab080abb7ecfefcc2f05"} Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.108438 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a667b210df76a1b9615f469b7a212c9e1abf68463fdab080abb7ecfefcc2f05" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.256468 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nhpn8"] Feb 26 11:30:46 crc kubenswrapper[4699]: W0226 11:30:46.290270 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e74821a_c4e5_4812_829d_c6b60b6657b8.slice/crio-1e0f55984a322052ab8a31e913928c3a3606b3448556d7abf749cfd0762d8811 WatchSource:0}: Error finding container 1e0f55984a322052ab8a31e913928c3a3606b3448556d7abf749cfd0762d8811: Status 404 returned error can't find the container with id 1e0f55984a322052ab8a31e913928c3a3606b3448556d7abf749cfd0762d8811 Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.384641 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-htqpz"] Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.387168 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-htqpz" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.393699 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-htqpz"] Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.404515 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f9e8-account-create-update-zqq4d"] Feb 26 11:30:46 crc kubenswrapper[4699]: W0226 11:30:46.405410 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c209748_0c47_4bbb_883b_f4c245b6a156.slice/crio-4462b064acce42a7acadd47c88ba90a846e89e85ca3f9f837ec64f18e4763308 WatchSource:0}: Error finding container 4462b064acce42a7acadd47c88ba90a846e89e85ca3f9f837ec64f18e4763308: Status 404 returned error can't find the container with id 4462b064acce42a7acadd47c88ba90a846e89e85ca3f9f837ec64f18e4763308 Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.448258 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54x8m\" (UniqueName: \"kubernetes.io/projected/64b0134d-d882-4622-86a4-ab8172ee4fb2-kube-api-access-54x8m\") pod \"keystone-db-create-htqpz\" (UID: \"64b0134d-d882-4622-86a4-ab8172ee4fb2\") " pod="openstack/keystone-db-create-htqpz" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.448320 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b0134d-d882-4622-86a4-ab8172ee4fb2-operator-scripts\") pod \"keystone-db-create-htqpz\" (UID: \"64b0134d-d882-4622-86a4-ab8172ee4fb2\") " pod="openstack/keystone-db-create-htqpz" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.472076 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0fa1-account-create-update-l7dhx"] Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.474328 
4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0fa1-account-create-update-l7dhx" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.479762 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.486488 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0fa1-account-create-update-l7dhx"] Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.554003 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d08e57-ba28-4614-8b11-2bd1bd4f836f-operator-scripts\") pod \"keystone-0fa1-account-create-update-l7dhx\" (UID: \"22d08e57-ba28-4614-8b11-2bd1bd4f836f\") " pod="openstack/keystone-0fa1-account-create-update-l7dhx" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.554410 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xrmj\" (UniqueName: \"kubernetes.io/projected/22d08e57-ba28-4614-8b11-2bd1bd4f836f-kube-api-access-8xrmj\") pod \"keystone-0fa1-account-create-update-l7dhx\" (UID: \"22d08e57-ba28-4614-8b11-2bd1bd4f836f\") " pod="openstack/keystone-0fa1-account-create-update-l7dhx" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.554576 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54x8m\" (UniqueName: \"kubernetes.io/projected/64b0134d-d882-4622-86a4-ab8172ee4fb2-kube-api-access-54x8m\") pod \"keystone-db-create-htqpz\" (UID: \"64b0134d-d882-4622-86a4-ab8172ee4fb2\") " pod="openstack/keystone-db-create-htqpz" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.554697 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/64b0134d-d882-4622-86a4-ab8172ee4fb2-operator-scripts\") pod \"keystone-db-create-htqpz\" (UID: \"64b0134d-d882-4622-86a4-ab8172ee4fb2\") " pod="openstack/keystone-db-create-htqpz" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.555569 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b0134d-d882-4622-86a4-ab8172ee4fb2-operator-scripts\") pod \"keystone-db-create-htqpz\" (UID: \"64b0134d-d882-4622-86a4-ab8172ee4fb2\") " pod="openstack/keystone-db-create-htqpz" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.582577 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54x8m\" (UniqueName: \"kubernetes.io/projected/64b0134d-d882-4622-86a4-ab8172ee4fb2-kube-api-access-54x8m\") pod \"keystone-db-create-htqpz\" (UID: \"64b0134d-d882-4622-86a4-ab8172ee4fb2\") " pod="openstack/keystone-db-create-htqpz" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.585904 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lqqdx" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.655824 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6d5r\" (UniqueName: \"kubernetes.io/projected/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-kube-api-access-k6d5r\") pod \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.655938 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-etc-swift\") pod \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.655979 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-scripts\") pod \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.656041 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-swiftconf\") pod \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.656543 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-ring-data-devices\") pod \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.657265 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9125ee3a-a0b6-469b-b79d-3a376f2d5d91" (UID: "9125ee3a-a0b6-469b-b79d-3a376f2d5d91"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.657361 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-combined-ca-bundle\") pod \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.657450 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-dispersionconf\") pod \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\" (UID: \"9125ee3a-a0b6-469b-b79d-3a376f2d5d91\") " Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.657592 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9125ee3a-a0b6-469b-b79d-3a376f2d5d91" (UID: "9125ee3a-a0b6-469b-b79d-3a376f2d5d91"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.658063 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xrmj\" (UniqueName: \"kubernetes.io/projected/22d08e57-ba28-4614-8b11-2bd1bd4f836f-kube-api-access-8xrmj\") pod \"keystone-0fa1-account-create-update-l7dhx\" (UID: \"22d08e57-ba28-4614-8b11-2bd1bd4f836f\") " pod="openstack/keystone-0fa1-account-create-update-l7dhx" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.658288 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d08e57-ba28-4614-8b11-2bd1bd4f836f-operator-scripts\") pod \"keystone-0fa1-account-create-update-l7dhx\" (UID: \"22d08e57-ba28-4614-8b11-2bd1bd4f836f\") " pod="openstack/keystone-0fa1-account-create-update-l7dhx" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.658341 4699 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.658366 4699 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.659139 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d08e57-ba28-4614-8b11-2bd1bd4f836f-operator-scripts\") pod \"keystone-0fa1-account-create-update-l7dhx\" (UID: \"22d08e57-ba28-4614-8b11-2bd1bd4f836f\") " pod="openstack/keystone-0fa1-account-create-update-l7dhx" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.660867 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-kube-api-access-k6d5r" (OuterVolumeSpecName: "kube-api-access-k6d5r") pod "9125ee3a-a0b6-469b-b79d-3a376f2d5d91" (UID: "9125ee3a-a0b6-469b-b79d-3a376f2d5d91"). InnerVolumeSpecName "kube-api-access-k6d5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.668753 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9125ee3a-a0b6-469b-b79d-3a376f2d5d91" (UID: "9125ee3a-a0b6-469b-b79d-3a376f2d5d91"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.679607 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xrmj\" (UniqueName: \"kubernetes.io/projected/22d08e57-ba28-4614-8b11-2bd1bd4f836f-kube-api-access-8xrmj\") pod \"keystone-0fa1-account-create-update-l7dhx\" (UID: \"22d08e57-ba28-4614-8b11-2bd1bd4f836f\") " pod="openstack/keystone-0fa1-account-create-update-l7dhx" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.680583 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-scripts" (OuterVolumeSpecName: "scripts") pod "9125ee3a-a0b6-469b-b79d-3a376f2d5d91" (UID: "9125ee3a-a0b6-469b-b79d-3a376f2d5d91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.689965 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9125ee3a-a0b6-469b-b79d-3a376f2d5d91" (UID: "9125ee3a-a0b6-469b-b79d-3a376f2d5d91"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.695224 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9125ee3a-a0b6-469b-b79d-3a376f2d5d91" (UID: "9125ee3a-a0b6-469b-b79d-3a376f2d5d91"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.759606 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.759647 4699 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.759657 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6d5r\" (UniqueName: \"kubernetes.io/projected/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-kube-api-access-k6d5r\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.759673 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.759685 4699 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9125ee3a-a0b6-469b-b79d-3a376f2d5d91-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.786062 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-htqpz" Feb 26 11:30:46 crc kubenswrapper[4699]: I0226 11:30:46.877635 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0fa1-account-create-update-l7dhx" Feb 26 11:30:47 crc kubenswrapper[4699]: I0226 11:30:47.113319 4699 generic.go:334] "Generic (PLEG): container finished" podID="9c209748-0c47-4bbb-883b-f4c245b6a156" containerID="c5f501a1150c4caded935575b10f8f9230324616853238eace0db08d01347483" exitCode=0 Feb 26 11:30:47 crc kubenswrapper[4699]: I0226 11:30:47.113401 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f9e8-account-create-update-zqq4d" event={"ID":"9c209748-0c47-4bbb-883b-f4c245b6a156","Type":"ContainerDied","Data":"c5f501a1150c4caded935575b10f8f9230324616853238eace0db08d01347483"} Feb 26 11:30:47 crc kubenswrapper[4699]: I0226 11:30:47.113716 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f9e8-account-create-update-zqq4d" event={"ID":"9c209748-0c47-4bbb-883b-f4c245b6a156","Type":"ContainerStarted","Data":"4462b064acce42a7acadd47c88ba90a846e89e85ca3f9f837ec64f18e4763308"} Feb 26 11:30:47 crc kubenswrapper[4699]: I0226 11:30:47.126796 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lqqdx" event={"ID":"9125ee3a-a0b6-469b-b79d-3a376f2d5d91","Type":"ContainerDied","Data":"a3441a4af4d4bdfc7161195190910ab39fbc98b45c549fbd1135470c64e160b7"} Feb 26 11:30:47 crc kubenswrapper[4699]: I0226 11:30:47.126856 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3441a4af4d4bdfc7161195190910ab39fbc98b45c549fbd1135470c64e160b7" Feb 26 11:30:47 crc kubenswrapper[4699]: I0226 11:30:47.126950 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lqqdx"
Feb 26 11:30:47 crc kubenswrapper[4699]: I0226 11:30:47.140325 4699 generic.go:334] "Generic (PLEG): container finished" podID="0e74821a-c4e5-4812-829d-c6b60b6657b8" containerID="e9c4f64540efb8ca94268435547206be7e8a21ea869414c0e0fe3fdc2ad23ae0" exitCode=0
Feb 26 11:30:47 crc kubenswrapper[4699]: I0226 11:30:47.140379 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nhpn8" event={"ID":"0e74821a-c4e5-4812-829d-c6b60b6657b8","Type":"ContainerDied","Data":"e9c4f64540efb8ca94268435547206be7e8a21ea869414c0e0fe3fdc2ad23ae0"}
Feb 26 11:30:47 crc kubenswrapper[4699]: I0226 11:30:47.140412 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nhpn8" event={"ID":"0e74821a-c4e5-4812-829d-c6b60b6657b8","Type":"ContainerStarted","Data":"1e0f55984a322052ab8a31e913928c3a3606b3448556d7abf749cfd0762d8811"}
Feb 26 11:30:47 crc kubenswrapper[4699]: I0226 11:30:47.717438 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0fa1-account-create-update-l7dhx"]
Feb 26 11:30:47 crc kubenswrapper[4699]: W0226 11:30:47.722102 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d08e57_ba28_4614_8b11_2bd1bd4f836f.slice/crio-20e4ebfd1d0243f30ce7e375a5ce55aba074451b2b65a840da783bdddd8e987b WatchSource:0}: Error finding container 20e4ebfd1d0243f30ce7e375a5ce55aba074451b2b65a840da783bdddd8e987b: Status 404 returned error can't find the container with id 20e4ebfd1d0243f30ce7e375a5ce55aba074451b2b65a840da783bdddd8e987b
Feb 26 11:30:47 crc kubenswrapper[4699]: I0226 11:30:47.858824 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-htqpz"]
Feb 26 11:30:47 crc kubenswrapper[4699]: W0226 11:30:47.863220 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64b0134d_d882_4622_86a4_ab8172ee4fb2.slice/crio-ae8f44b486fa15025b9de2e5a645bec5ea7de597824b9c9e5ed6ba992765d39a WatchSource:0}: Error finding container ae8f44b486fa15025b9de2e5a645bec5ea7de597824b9c9e5ed6ba992765d39a: Status 404 returned error can't find the container with id ae8f44b486fa15025b9de2e5a645bec5ea7de597824b9c9e5ed6ba992765d39a
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.150108 4699 generic.go:334] "Generic (PLEG): container finished" podID="22d08e57-ba28-4614-8b11-2bd1bd4f836f" containerID="f56c01ae851446ecb80715a4bf6a848caa81425dc5709a8852bd80e336fdb67f" exitCode=0
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.150216 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0fa1-account-create-update-l7dhx" event={"ID":"22d08e57-ba28-4614-8b11-2bd1bd4f836f","Type":"ContainerDied","Data":"f56c01ae851446ecb80715a4bf6a848caa81425dc5709a8852bd80e336fdb67f"}
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.150251 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0fa1-account-create-update-l7dhx" event={"ID":"22d08e57-ba28-4614-8b11-2bd1bd4f836f","Type":"ContainerStarted","Data":"20e4ebfd1d0243f30ce7e375a5ce55aba074451b2b65a840da783bdddd8e987b"}
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.152782 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"1abd5235054d6438660e4cd9e2b87298f012eb8e00a183cdffc64128ea761a95"}
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.152828 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"25c64821bb164d049d829a26cdc22e4714b6156e57d06d5ef5f8043bc28be1f8"}
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.152838 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"1b2eb469c18895b34d970e41a0f5223711facf0c299ecd2aec6d5a15360c562f"}
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.152847 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"69d593c1a30e3c31c8ab0e87969a35f50e8b930c935e4fe7853873452f097520"}
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.155562 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-htqpz" event={"ID":"64b0134d-d882-4622-86a4-ab8172ee4fb2","Type":"ContainerStarted","Data":"9e6e239d14eb5fdc0f0fee3107f485263c4c1938d985d9c817ca4f3885c7de71"}
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.155588 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-htqpz" event={"ID":"64b0134d-d882-4622-86a4-ab8172ee4fb2","Type":"ContainerStarted","Data":"ae8f44b486fa15025b9de2e5a645bec5ea7de597824b9c9e5ed6ba992765d39a"}
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.193811 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-htqpz" podStartSLOduration=2.193792905 podStartE2EDuration="2.193792905s" podCreationTimestamp="2026-02-26 11:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:30:48.19223649 +0000 UTC m=+1194.003062914" watchObservedRunningTime="2026-02-26 11:30:48.193792905 +0000 UTC m=+1194.004619339"
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.600280 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f9e8-account-create-update-zqq4d"
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.607254 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nhpn8"
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.712880 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f7fx\" (UniqueName: \"kubernetes.io/projected/9c209748-0c47-4bbb-883b-f4c245b6a156-kube-api-access-2f7fx\") pod \"9c209748-0c47-4bbb-883b-f4c245b6a156\" (UID: \"9c209748-0c47-4bbb-883b-f4c245b6a156\") "
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.712974 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74821a-c4e5-4812-829d-c6b60b6657b8-operator-scripts\") pod \"0e74821a-c4e5-4812-829d-c6b60b6657b8\" (UID: \"0e74821a-c4e5-4812-829d-c6b60b6657b8\") "
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.713798 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e74821a-c4e5-4812-829d-c6b60b6657b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e74821a-c4e5-4812-829d-c6b60b6657b8" (UID: "0e74821a-c4e5-4812-829d-c6b60b6657b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.713921 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c209748-0c47-4bbb-883b-f4c245b6a156-operator-scripts\") pod \"9c209748-0c47-4bbb-883b-f4c245b6a156\" (UID: \"9c209748-0c47-4bbb-883b-f4c245b6a156\") "
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.714425 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c209748-0c47-4bbb-883b-f4c245b6a156-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c209748-0c47-4bbb-883b-f4c245b6a156" (UID: "9c209748-0c47-4bbb-883b-f4c245b6a156"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.714483 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzxwx\" (UniqueName: \"kubernetes.io/projected/0e74821a-c4e5-4812-829d-c6b60b6657b8-kube-api-access-fzxwx\") pod \"0e74821a-c4e5-4812-829d-c6b60b6657b8\" (UID: \"0e74821a-c4e5-4812-829d-c6b60b6657b8\") "
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.715306 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c209748-0c47-4bbb-883b-f4c245b6a156-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.715331 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74821a-c4e5-4812-829d-c6b60b6657b8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.718903 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c209748-0c47-4bbb-883b-f4c245b6a156-kube-api-access-2f7fx" (OuterVolumeSpecName: "kube-api-access-2f7fx") pod "9c209748-0c47-4bbb-883b-f4c245b6a156" (UID: "9c209748-0c47-4bbb-883b-f4c245b6a156"). InnerVolumeSpecName "kube-api-access-2f7fx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.718967 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e74821a-c4e5-4812-829d-c6b60b6657b8-kube-api-access-fzxwx" (OuterVolumeSpecName: "kube-api-access-fzxwx") pod "0e74821a-c4e5-4812-829d-c6b60b6657b8" (UID: "0e74821a-c4e5-4812-829d-c6b60b6657b8"). InnerVolumeSpecName "kube-api-access-fzxwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.770872 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-47sx2"]
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.776778 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-47sx2"]
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.817359 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzxwx\" (UniqueName: \"kubernetes.io/projected/0e74821a-c4e5-4812-829d-c6b60b6657b8-kube-api-access-fzxwx\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:48 crc kubenswrapper[4699]: I0226 11:30:48.817397 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f7fx\" (UniqueName: \"kubernetes.io/projected/9c209748-0c47-4bbb-883b-f4c245b6a156-kube-api-access-2f7fx\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.164525 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nhpn8" event={"ID":"0e74821a-c4e5-4812-829d-c6b60b6657b8","Type":"ContainerDied","Data":"1e0f55984a322052ab8a31e913928c3a3606b3448556d7abf749cfd0762d8811"}
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.164570 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e0f55984a322052ab8a31e913928c3a3606b3448556d7abf749cfd0762d8811"
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.164677 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nhpn8"
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.168399 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f9e8-account-create-update-zqq4d" event={"ID":"9c209748-0c47-4bbb-883b-f4c245b6a156","Type":"ContainerDied","Data":"4462b064acce42a7acadd47c88ba90a846e89e85ca3f9f837ec64f18e4763308"}
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.168445 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4462b064acce42a7acadd47c88ba90a846e89e85ca3f9f837ec64f18e4763308"
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.168508 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f9e8-account-create-update-zqq4d"
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.184372 4699 generic.go:334] "Generic (PLEG): container finished" podID="64b0134d-d882-4622-86a4-ab8172ee4fb2" containerID="9e6e239d14eb5fdc0f0fee3107f485263c4c1938d985d9c817ca4f3885c7de71" exitCode=0
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.184418 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-htqpz" event={"ID":"64b0134d-d882-4622-86a4-ab8172ee4fb2","Type":"ContainerDied","Data":"9e6e239d14eb5fdc0f0fee3107f485263c4c1938d985d9c817ca4f3885c7de71"}
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.458695 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0fa1-account-create-update-l7dhx"
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.528399 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d08e57-ba28-4614-8b11-2bd1bd4f836f-operator-scripts\") pod \"22d08e57-ba28-4614-8b11-2bd1bd4f836f\" (UID: \"22d08e57-ba28-4614-8b11-2bd1bd4f836f\") "
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.528544 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xrmj\" (UniqueName: \"kubernetes.io/projected/22d08e57-ba28-4614-8b11-2bd1bd4f836f-kube-api-access-8xrmj\") pod \"22d08e57-ba28-4614-8b11-2bd1bd4f836f\" (UID: \"22d08e57-ba28-4614-8b11-2bd1bd4f836f\") "
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.528946 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22d08e57-ba28-4614-8b11-2bd1bd4f836f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22d08e57-ba28-4614-8b11-2bd1bd4f836f" (UID: "22d08e57-ba28-4614-8b11-2bd1bd4f836f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.533733 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d08e57-ba28-4614-8b11-2bd1bd4f836f-kube-api-access-8xrmj" (OuterVolumeSpecName: "kube-api-access-8xrmj") pod "22d08e57-ba28-4614-8b11-2bd1bd4f836f" (UID: "22d08e57-ba28-4614-8b11-2bd1bd4f836f"). InnerVolumeSpecName "kube-api-access-8xrmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.630919 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d08e57-ba28-4614-8b11-2bd1bd4f836f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:49 crc kubenswrapper[4699]: I0226 11:30:49.631205 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xrmj\" (UniqueName: \"kubernetes.io/projected/22d08e57-ba28-4614-8b11-2bd1bd4f836f-kube-api-access-8xrmj\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.191358 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0fa1-account-create-update-l7dhx" event={"ID":"22d08e57-ba28-4614-8b11-2bd1bd4f836f","Type":"ContainerDied","Data":"20e4ebfd1d0243f30ce7e375a5ce55aba074451b2b65a840da783bdddd8e987b"}
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.191569 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20e4ebfd1d0243f30ce7e375a5ce55aba074451b2b65a840da783bdddd8e987b"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.191618 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0fa1-account-create-update-l7dhx"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.196004 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"902223d261f9e97320c9d132642f54dce273be4a13565ccb3c24bd0da65d0020"}
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.196066 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"edae6eb35b2125381e91e42d3750d4c44d05d7a03eef0a838129137f2b9566b3"}
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.196081 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"b5a728b521560aedc5243bcdef0a6a51a51a76fd36b9b40122e6e0396a142810"}
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.196094 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"937eebe4bac690c01308a25068e0989e77f70ee4b7fca45b48883de7fb571197"}
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.272589 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acabaa2a-471d-49a2-9e75-b5c1a8eb590e" path="/var/lib/kubelet/pods/acabaa2a-471d-49a2-9e75-b5c1a8eb590e/volumes"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.558770 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-htqpz"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.605291 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nrvng" podUID="cd4015f0-f1a7-40d7-ae69-089f74a6873d" containerName="ovn-controller" probeResult="failure" output=<
Feb 26 11:30:50 crc kubenswrapper[4699]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 26 11:30:50 crc kubenswrapper[4699]: >
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.637108 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gxnxl"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.649943 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54x8m\" (UniqueName: \"kubernetes.io/projected/64b0134d-d882-4622-86a4-ab8172ee4fb2-kube-api-access-54x8m\") pod \"64b0134d-d882-4622-86a4-ab8172ee4fb2\" (UID: \"64b0134d-d882-4622-86a4-ab8172ee4fb2\") "
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.650000 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b0134d-d882-4622-86a4-ab8172ee4fb2-operator-scripts\") pod \"64b0134d-d882-4622-86a4-ab8172ee4fb2\" (UID: \"64b0134d-d882-4622-86a4-ab8172ee4fb2\") "
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.650663 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64b0134d-d882-4622-86a4-ab8172ee4fb2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64b0134d-d882-4622-86a4-ab8172ee4fb2" (UID: "64b0134d-d882-4622-86a4-ab8172ee4fb2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.654577 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b0134d-d882-4622-86a4-ab8172ee4fb2-kube-api-access-54x8m" (OuterVolumeSpecName: "kube-api-access-54x8m") pod "64b0134d-d882-4622-86a4-ab8172ee4fb2" (UID: "64b0134d-d882-4622-86a4-ab8172ee4fb2"). InnerVolumeSpecName "kube-api-access-54x8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.655911 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gxnxl"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.751531 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54x8m\" (UniqueName: \"kubernetes.io/projected/64b0134d-d882-4622-86a4-ab8172ee4fb2-kube-api-access-54x8m\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.751860 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b0134d-d882-4622-86a4-ab8172ee4fb2-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.753471 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-nblvp"]
Feb 26 11:30:50 crc kubenswrapper[4699]: E0226 11:30:50.753851 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e74821a-c4e5-4812-829d-c6b60b6657b8" containerName="mariadb-database-create"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.753871 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e74821a-c4e5-4812-829d-c6b60b6657b8" containerName="mariadb-database-create"
Feb 26 11:30:50 crc kubenswrapper[4699]: E0226 11:30:50.753892 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d08e57-ba28-4614-8b11-2bd1bd4f836f" containerName="mariadb-account-create-update"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.753900 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d08e57-ba28-4614-8b11-2bd1bd4f836f" containerName="mariadb-account-create-update"
Feb 26 11:30:50 crc kubenswrapper[4699]: E0226 11:30:50.753913 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b0134d-d882-4622-86a4-ab8172ee4fb2" containerName="mariadb-database-create"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.753920 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b0134d-d882-4622-86a4-ab8172ee4fb2" containerName="mariadb-database-create"
Feb 26 11:30:50 crc kubenswrapper[4699]: E0226 11:30:50.753945 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9125ee3a-a0b6-469b-b79d-3a376f2d5d91" containerName="swift-ring-rebalance"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.753953 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9125ee3a-a0b6-469b-b79d-3a376f2d5d91" containerName="swift-ring-rebalance"
Feb 26 11:30:50 crc kubenswrapper[4699]: E0226 11:30:50.753971 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c209748-0c47-4bbb-883b-f4c245b6a156" containerName="mariadb-account-create-update"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.753979 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c209748-0c47-4bbb-883b-f4c245b6a156" containerName="mariadb-account-create-update"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.754183 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b0134d-d882-4622-86a4-ab8172ee4fb2" containerName="mariadb-database-create"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.754198 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="9125ee3a-a0b6-469b-b79d-3a376f2d5d91" containerName="swift-ring-rebalance"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.754207 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e74821a-c4e5-4812-829d-c6b60b6657b8" containerName="mariadb-database-create"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.754219 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d08e57-ba28-4614-8b11-2bd1bd4f836f" containerName="mariadb-account-create-update"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.754229 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c209748-0c47-4bbb-883b-f4c245b6a156" containerName="mariadb-account-create-update"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.754768 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.759024 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.759589 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j4q6c"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.762009 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nblvp"]
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.853627 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-combined-ca-bundle\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.853750 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbj5f\" (UniqueName: \"kubernetes.io/projected/72c1d656-4f85-483b-b7a2-6132b71ae093-kube-api-access-vbj5f\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.853785 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-db-sync-config-data\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.853858 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-config-data\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.901069 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nrvng-config-8jbz5"]
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.902219 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.905193 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.919664 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nrvng-config-8jbz5"]
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.955666 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run-ovn\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.955715 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbj5f\" (UniqueName: \"kubernetes.io/projected/72c1d656-4f85-483b-b7a2-6132b71ae093-kube-api-access-vbj5f\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.955751 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-db-sync-config-data\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.955792 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-config-data\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.955840 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-log-ovn\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.955863 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-additional-scripts\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.955881 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-combined-ca-bundle\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.955907 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.956026 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qczbr\" (UniqueName: \"kubernetes.io/projected/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-kube-api-access-qczbr\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.956171 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-scripts\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.969361 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-config-data\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.971410 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-db-sync-config-data\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.971580 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-combined-ca-bundle\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:50 crc kubenswrapper[4699]: I0226 11:30:50.974576 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbj5f\" (UniqueName: \"kubernetes.io/projected/72c1d656-4f85-483b-b7a2-6132b71ae093-kube-api-access-vbj5f\") pod \"glance-db-sync-nblvp\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.057129 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-log-ovn\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.057494 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-additional-scripts\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.057519 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.057616 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.057512 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-log-ovn\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.057677 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qczbr\" (UniqueName: \"kubernetes.io/projected/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-kube-api-access-qczbr\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.057789 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-scripts\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.057824 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run-ovn\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.057981 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run-ovn\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.058230 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-additional-scripts\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.059848 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-scripts\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.076382 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qczbr\" (UniqueName: \"kubernetes.io/projected/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-kube-api-access-qczbr\") pod \"ovn-controller-nrvng-config-8jbz5\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.084492 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nblvp"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.222357 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.246466 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-htqpz" event={"ID":"64b0134d-d882-4622-86a4-ab8172ee4fb2","Type":"ContainerDied","Data":"ae8f44b486fa15025b9de2e5a645bec5ea7de597824b9c9e5ed6ba992765d39a"}
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.246525 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae8f44b486fa15025b9de2e5a645bec5ea7de597824b9c9e5ed6ba992765d39a"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.246523 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-htqpz"
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.655725 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nblvp"]
Feb 26 11:30:51 crc kubenswrapper[4699]: W0226 11:30:51.664345 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72c1d656_4f85_483b_b7a2_6132b71ae093.slice/crio-c8f798d0cd617868616cba798cc6a29d56d3b9e5026ef1c6b84fc3016e7bc40e WatchSource:0}: Error finding container c8f798d0cd617868616cba798cc6a29d56d3b9e5026ef1c6b84fc3016e7bc40e: Status 404 returned error can't find the container with id c8f798d0cd617868616cba798cc6a29d56d3b9e5026ef1c6b84fc3016e7bc40e
Feb 26 11:30:51 crc kubenswrapper[4699]: I0226 11:30:51.741431 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nrvng-config-8jbz5"]
Feb 26 11:30:51 crc kubenswrapper[4699]: W0226 11:30:51.745534 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac9468ff_bc02_4a6e_83b3_0f6a5a8876a1.slice/crio-ed20c534aca38673aafb1717e8c45b588dfeeb556cb91c29e830654d960b0a70 WatchSource:0}: Error finding container ed20c534aca38673aafb1717e8c45b588dfeeb556cb91c29e830654d960b0a70: Status 404 returned error can't find the container with id ed20c534aca38673aafb1717e8c45b588dfeeb556cb91c29e830654d960b0a70
Feb 26 11:30:52 crc kubenswrapper[4699]: I0226 11:30:52.260952 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nrvng-config-8jbz5" event={"ID":"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1","Type":"ContainerStarted","Data":"c6b236ca3c3f327dbd547c137704ae3085c07d33a8a0f68103faaa60a3289bc1"}
Feb 26 11:30:52 crc kubenswrapper[4699]: I0226 11:30:52.261251 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nrvng-config-8jbz5"
event={"ID":"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1","Type":"ContainerStarted","Data":"ed20c534aca38673aafb1717e8c45b588dfeeb556cb91c29e830654d960b0a70"} Feb 26 11:30:52 crc kubenswrapper[4699]: I0226 11:30:52.275267 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nblvp" event={"ID":"72c1d656-4f85-483b-b7a2-6132b71ae093","Type":"ContainerStarted","Data":"c8f798d0cd617868616cba798cc6a29d56d3b9e5026ef1c6b84fc3016e7bc40e"} Feb 26 11:30:52 crc kubenswrapper[4699]: I0226 11:30:52.280829 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"559c831350972c9fb72080a6ca387d917a3cd3d6a836ca18541ac2393f96585f"} Feb 26 11:30:52 crc kubenswrapper[4699]: I0226 11:30:52.280869 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"9b80de6c685d283bba6c9ee68dbafa5d881b0a8dafd5adde3ceebf5eb7dace4f"} Feb 26 11:30:52 crc kubenswrapper[4699]: I0226 11:30:52.280881 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"2e907cbd0611162733bea4638053f70d7b24fa340784ef231bfd03eeac9a5c47"} Feb 26 11:30:52 crc kubenswrapper[4699]: I0226 11:30:52.280890 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"852545b613e8bd35812950cb85024e453bb58b7b731c8a57fcbeeee92353a0e6"} Feb 26 11:30:52 crc kubenswrapper[4699]: I0226 11:30:52.280898 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"0cacbba609d21ec506f709c084156a2064c87e2d9dbdd77a8c579730774c901a"} Feb 26 11:30:52 crc 
kubenswrapper[4699]: I0226 11:30:52.308324 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nrvng-config-8jbz5" podStartSLOduration=2.308299368 podStartE2EDuration="2.308299368s" podCreationTimestamp="2026-02-26 11:30:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:30:52.300285396 +0000 UTC m=+1198.111111820" watchObservedRunningTime="2026-02-26 11:30:52.308299368 +0000 UTC m=+1198.119125812" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.296144 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"19e6eb1a84328da44fcedd6164617f9e6ed1da2193ac5b1e14579975aee97b6c"} Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.296503 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f23ec57b-7ab1-4152-8108-e0e27b4ba95c","Type":"ContainerStarted","Data":"4598d639014915e11348a97d71f0dd85c79c60c306a6f9b7337450e6d52b6f98"} Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.300943 4699 generic.go:334] "Generic (PLEG): container finished" podID="ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1" containerID="c6b236ca3c3f327dbd547c137704ae3085c07d33a8a0f68103faaa60a3289bc1" exitCode=0 Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.300995 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nrvng-config-8jbz5" event={"ID":"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1","Type":"ContainerDied","Data":"c6b236ca3c3f327dbd547c137704ae3085c07d33a8a0f68103faaa60a3289bc1"} Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.382108 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.021940015 podStartE2EDuration="26.382091648s" 
podCreationTimestamp="2026-02-26 11:30:27 +0000 UTC" firstStartedPulling="2026-02-26 11:30:45.712596073 +0000 UTC m=+1191.523422507" lastFinishedPulling="2026-02-26 11:30:51.072747706 +0000 UTC m=+1196.883574140" observedRunningTime="2026-02-26 11:30:53.352983308 +0000 UTC m=+1199.163809762" watchObservedRunningTime="2026-02-26 11:30:53.382091648 +0000 UTC m=+1199.192918082" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.639012 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-c6zn7"] Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.641168 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.643849 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.645458 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-c6zn7"] Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.711528 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-config\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.711603 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.711688 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.711718 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrxfq\" (UniqueName: \"kubernetes.io/projected/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-kube-api-access-lrxfq\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.711918 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.712023 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.771499 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gjgfc"] Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.774316 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gjgfc" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.776320 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.780741 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gjgfc"] Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.814073 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-config\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.814169 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.814229 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.814253 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrxfq\" (UniqueName: \"kubernetes.io/projected/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-kube-api-access-lrxfq\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc 
kubenswrapper[4699]: I0226 11:30:53.814275 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.814295 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c102f5c-cbaf-429e-b487-8b179f989720-operator-scripts\") pod \"root-account-create-update-gjgfc\" (UID: \"7c102f5c-cbaf-429e-b487-8b179f989720\") " pod="openstack/root-account-create-update-gjgfc" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.814322 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.814342 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q42gd\" (UniqueName: \"kubernetes.io/projected/7c102f5c-cbaf-429e-b487-8b179f989720-kube-api-access-q42gd\") pod \"root-account-create-update-gjgfc\" (UID: \"7c102f5c-cbaf-429e-b487-8b179f989720\") " pod="openstack/root-account-create-update-gjgfc" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.815278 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-config\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 
crc kubenswrapper[4699]: I0226 11:30:53.815808 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.816344 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.816907 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.818737 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.834325 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrxfq\" (UniqueName: \"kubernetes.io/projected/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-kube-api-access-lrxfq\") pod \"dnsmasq-dns-6d5b6d6b67-c6zn7\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.916096 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c102f5c-cbaf-429e-b487-8b179f989720-operator-scripts\") pod \"root-account-create-update-gjgfc\" (UID: \"7c102f5c-cbaf-429e-b487-8b179f989720\") " pod="openstack/root-account-create-update-gjgfc" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.916198 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q42gd\" (UniqueName: \"kubernetes.io/projected/7c102f5c-cbaf-429e-b487-8b179f989720-kube-api-access-q42gd\") pod \"root-account-create-update-gjgfc\" (UID: \"7c102f5c-cbaf-429e-b487-8b179f989720\") " pod="openstack/root-account-create-update-gjgfc" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.917357 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c102f5c-cbaf-429e-b487-8b179f989720-operator-scripts\") pod \"root-account-create-update-gjgfc\" (UID: \"7c102f5c-cbaf-429e-b487-8b179f989720\") " pod="openstack/root-account-create-update-gjgfc" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.931765 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q42gd\" (UniqueName: \"kubernetes.io/projected/7c102f5c-cbaf-429e-b487-8b179f989720-kube-api-access-q42gd\") pod \"root-account-create-update-gjgfc\" (UID: \"7c102f5c-cbaf-429e-b487-8b179f989720\") " pod="openstack/root-account-create-update-gjgfc" Feb 26 11:30:53 crc kubenswrapper[4699]: I0226 11:30:53.976159 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:30:54 crc kubenswrapper[4699]: I0226 11:30:54.095966 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gjgfc" Feb 26 11:30:55 crc kubenswrapper[4699]: I0226 11:30:54.419597 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-c6zn7"] Feb 26 11:30:55 crc kubenswrapper[4699]: I0226 11:30:55.323005 4699 generic.go:334] "Generic (PLEG): container finished" podID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerID="9682f0a3316099cd400015d1d5abe7c7f75f2f43640ff21520a7cddc2ba23260" exitCode=0 Feb 26 11:30:55 crc kubenswrapper[4699]: I0226 11:30:55.323379 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" event={"ID":"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9","Type":"ContainerDied","Data":"9682f0a3316099cd400015d1d5abe7c7f75f2f43640ff21520a7cddc2ba23260"} Feb 26 11:30:55 crc kubenswrapper[4699]: I0226 11:30:55.323414 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" event={"ID":"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9","Type":"ContainerStarted","Data":"db12e6ab7e70b99da81ac4834b205007d7df170db9b9e0a8bd4ab5007bbb10d9"} Feb 26 11:30:55 crc kubenswrapper[4699]: I0226 11:30:55.592784 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nrvng" Feb 26 11:30:55 crc kubenswrapper[4699]: I0226 11:30:55.956920 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nrvng-config-8jbz5" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.037334 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gjgfc"] Feb 26 11:30:56 crc kubenswrapper[4699]: W0226 11:30:56.040334 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c102f5c_cbaf_429e_b487_8b179f989720.slice/crio-bfe030826ccd0b67fb14d360359010d9b579493ba9a7174535b351cb92366fa9 WatchSource:0}: Error finding container bfe030826ccd0b67fb14d360359010d9b579493ba9a7174535b351cb92366fa9: Status 404 returned error can't find the container with id bfe030826ccd0b67fb14d360359010d9b579493ba9a7174535b351cb92366fa9 Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.054993 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run\") pod \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.055161 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run-ovn\") pod \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.055226 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-scripts\") pod \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.055272 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qczbr\" (UniqueName: 
\"kubernetes.io/projected/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-kube-api-access-qczbr\") pod \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.055311 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-additional-scripts\") pod \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.055361 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-log-ovn\") pod \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\" (UID: \"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1\") " Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.055145 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run" (OuterVolumeSpecName: "var-run") pod "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1" (UID: "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.055194 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1" (UID: "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.055690 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1" (UID: "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.056076 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1" (UID: "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.056674 4699 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.056699 4699 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.056714 4699 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.056724 4699 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-var-run\") on node \"crc\" DevicePath \"\"" 
Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.057146 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-scripts" (OuterVolumeSpecName: "scripts") pod "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1" (UID: "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.061714 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-kube-api-access-qczbr" (OuterVolumeSpecName: "kube-api-access-qczbr") pod "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1" (UID: "ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1"). InnerVolumeSpecName "kube-api-access-qczbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.160774 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.161220 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qczbr\" (UniqueName: \"kubernetes.io/projected/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1-kube-api-access-qczbr\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.332215 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nrvng-config-8jbz5" event={"ID":"ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1","Type":"ContainerDied","Data":"ed20c534aca38673aafb1717e8c45b588dfeeb556cb91c29e830654d960b0a70"} Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.332253 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed20c534aca38673aafb1717e8c45b588dfeeb556cb91c29e830654d960b0a70" Feb 26 11:30:56 crc kubenswrapper[4699]: 
I0226 11:30:56.332231 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nrvng-config-8jbz5"
Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.333534 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gjgfc" event={"ID":"7c102f5c-cbaf-429e-b487-8b179f989720","Type":"ContainerStarted","Data":"bfe030826ccd0b67fb14d360359010d9b579493ba9a7174535b351cb92366fa9"}
Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.335243 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" event={"ID":"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9","Type":"ContainerStarted","Data":"fe976bbefde2fa99a8167c39df0e86003afc4a567d5a020335a060d2c650e894"}
Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.336364 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7"
Feb 26 11:30:56 crc kubenswrapper[4699]: I0226 11:30:56.357384 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" podStartSLOduration=3.357340549 podStartE2EDuration="3.357340549s" podCreationTimestamp="2026-02-26 11:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:30:56.355054333 +0000 UTC m=+1202.165880797" watchObservedRunningTime="2026-02-26 11:30:56.357340549 +0000 UTC m=+1202.168167003"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.041711 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nrvng-config-8jbz5"]
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.048915 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nrvng-config-8jbz5"]
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.195092 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nrvng-config-gbwh4"]
Feb 26 11:30:57 crc kubenswrapper[4699]: E0226 11:30:57.195541 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1" containerName="ovn-config"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.195566 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1" containerName="ovn-config"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.195787 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1" containerName="ovn-config"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.196466 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.198792 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.224726 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nrvng-config-gbwh4"]
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.279216 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-log-ovn\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.279274 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xbdb\" (UniqueName: \"kubernetes.io/projected/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-kube-api-access-2xbdb\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.279301 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.279324 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-scripts\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.279477 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run-ovn\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.279547 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-additional-scripts\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.381348 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-log-ovn\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.381691 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xbdb\" (UniqueName: \"kubernetes.io/projected/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-kube-api-access-2xbdb\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.381713 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.381728 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-scripts\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.381775 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run-ovn\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.381797 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-additional-scripts\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.381859 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.381998 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-log-ovn\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.382576 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run-ovn\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.383241 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-additional-scripts\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.384313 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-scripts\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.399142 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xbdb\" (UniqueName: \"kubernetes.io/projected/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-kube-api-access-2xbdb\") pod \"ovn-controller-nrvng-config-gbwh4\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:57 crc kubenswrapper[4699]: I0226 11:30:57.511886 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nrvng-config-gbwh4"
Feb 26 11:30:58 crc kubenswrapper[4699]: I0226 11:30:58.270182 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1" path="/var/lib/kubelet/pods/ac9468ff-bc02-4a6e-83b3-0f6a5a8876a1/volumes"
Feb 26 11:30:58 crc kubenswrapper[4699]: I0226 11:30:58.355838 4699 generic.go:334] "Generic (PLEG): container finished" podID="7c102f5c-cbaf-429e-b487-8b179f989720" containerID="91516e9d3caed541543b28d1d1f9c624822ee3d8a280a0f3e6e9514175f1fe30" exitCode=0
Feb 26 11:30:58 crc kubenswrapper[4699]: I0226 11:30:58.359977 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gjgfc" event={"ID":"7c102f5c-cbaf-429e-b487-8b179f989720","Type":"ContainerDied","Data":"91516e9d3caed541543b28d1d1f9c624822ee3d8a280a0f3e6e9514175f1fe30"}
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.340313 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.402292 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.658466 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-v77r5"]
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.659910 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v77r5"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.700605 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v77r5"]
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.772319 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a7a2-account-create-update-l2mt4"]
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.773976 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a7a2-account-create-update-l2mt4"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.781165 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-operator-scripts\") pod \"cinder-db-create-v77r5\" (UID: \"758bbe1c-d826-47f7-aff6-54e9fc4ebe63\") " pod="openstack/cinder-db-create-v77r5"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.781634 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xts5\" (UniqueName: \"kubernetes.io/projected/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-kube-api-access-5xts5\") pod \"cinder-db-create-v77r5\" (UID: \"758bbe1c-d826-47f7-aff6-54e9fc4ebe63\") " pod="openstack/cinder-db-create-v77r5"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.785188 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.793689 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a7a2-account-create-update-l2mt4"]
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.853924 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bl9wp"]
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.855762 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bl9wp"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.881853 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bl9wp"]
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.886593 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a68fa18-1c49-4d3d-bc5f-75763944d818-operator-scripts\") pod \"cinder-a7a2-account-create-update-l2mt4\" (UID: \"7a68fa18-1c49-4d3d-bc5f-75763944d818\") " pod="openstack/cinder-a7a2-account-create-update-l2mt4"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.886935 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-operator-scripts\") pod \"cinder-db-create-v77r5\" (UID: \"758bbe1c-d826-47f7-aff6-54e9fc4ebe63\") " pod="openstack/cinder-db-create-v77r5"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.887154 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xts5\" (UniqueName: \"kubernetes.io/projected/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-kube-api-access-5xts5\") pod \"cinder-db-create-v77r5\" (UID: \"758bbe1c-d826-47f7-aff6-54e9fc4ebe63\") " pod="openstack/cinder-db-create-v77r5"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.887195 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8wvk\" (UniqueName: \"kubernetes.io/projected/7a68fa18-1c49-4d3d-bc5f-75763944d818-kube-api-access-b8wvk\") pod \"cinder-a7a2-account-create-update-l2mt4\" (UID: \"7a68fa18-1c49-4d3d-bc5f-75763944d818\") " pod="openstack/cinder-a7a2-account-create-update-l2mt4"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.888109 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-operator-scripts\") pod \"cinder-db-create-v77r5\" (UID: \"758bbe1c-d826-47f7-aff6-54e9fc4ebe63\") " pod="openstack/cinder-db-create-v77r5"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.920373 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xts5\" (UniqueName: \"kubernetes.io/projected/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-kube-api-access-5xts5\") pod \"cinder-db-create-v77r5\" (UID: \"758bbe1c-d826-47f7-aff6-54e9fc4ebe63\") " pod="openstack/cinder-db-create-v77r5"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.969964 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4fx8g"]
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.971172 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4fx8g"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.979508 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4fx8g"]
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.988667 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8wvk\" (UniqueName: \"kubernetes.io/projected/7a68fa18-1c49-4d3d-bc5f-75763944d818-kube-api-access-b8wvk\") pod \"cinder-a7a2-account-create-update-l2mt4\" (UID: \"7a68fa18-1c49-4d3d-bc5f-75763944d818\") " pod="openstack/cinder-a7a2-account-create-update-l2mt4"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.988744 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftbnl\" (UniqueName: \"kubernetes.io/projected/1029eddb-2336-4ec5-af4a-b8fed82d3d55-kube-api-access-ftbnl\") pod \"barbican-db-create-bl9wp\" (UID: \"1029eddb-2336-4ec5-af4a-b8fed82d3d55\") " pod="openstack/barbican-db-create-bl9wp"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.988845 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1029eddb-2336-4ec5-af4a-b8fed82d3d55-operator-scripts\") pod \"barbican-db-create-bl9wp\" (UID: \"1029eddb-2336-4ec5-af4a-b8fed82d3d55\") " pod="openstack/barbican-db-create-bl9wp"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.988900 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a68fa18-1c49-4d3d-bc5f-75763944d818-operator-scripts\") pod \"cinder-a7a2-account-create-update-l2mt4\" (UID: \"7a68fa18-1c49-4d3d-bc5f-75763944d818\") " pod="openstack/cinder-a7a2-account-create-update-l2mt4"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.994227 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a68fa18-1c49-4d3d-bc5f-75763944d818-operator-scripts\") pod \"cinder-a7a2-account-create-update-l2mt4\" (UID: \"7a68fa18-1c49-4d3d-bc5f-75763944d818\") " pod="openstack/cinder-a7a2-account-create-update-l2mt4"
Feb 26 11:31:02 crc kubenswrapper[4699]: I0226 11:31:02.999607 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v77r5"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.012248 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8wvk\" (UniqueName: \"kubernetes.io/projected/7a68fa18-1c49-4d3d-bc5f-75763944d818-kube-api-access-b8wvk\") pod \"cinder-a7a2-account-create-update-l2mt4\" (UID: \"7a68fa18-1c49-4d3d-bc5f-75763944d818\") " pod="openstack/cinder-a7a2-account-create-update-l2mt4"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.059760 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5a9e-account-create-update-fzhw8"]
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.061697 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5a9e-account-create-update-fzhw8"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.067465 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.075621 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5a9e-account-create-update-fzhw8"]
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.092970 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a7a2-account-create-update-l2mt4"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.093311 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1029eddb-2336-4ec5-af4a-b8fed82d3d55-operator-scripts\") pod \"barbican-db-create-bl9wp\" (UID: \"1029eddb-2336-4ec5-af4a-b8fed82d3d55\") " pod="openstack/barbican-db-create-bl9wp"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.093354 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-operator-scripts\") pod \"neutron-db-create-4fx8g\" (UID: \"5c9e36d9-5d53-46d8-a91a-22dc9338ab58\") " pod="openstack/neutron-db-create-4fx8g"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.093403 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l6p4\" (UniqueName: \"kubernetes.io/projected/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-kube-api-access-9l6p4\") pod \"neutron-db-create-4fx8g\" (UID: \"5c9e36d9-5d53-46d8-a91a-22dc9338ab58\") " pod="openstack/neutron-db-create-4fx8g"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.093492 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftbnl\" (UniqueName: \"kubernetes.io/projected/1029eddb-2336-4ec5-af4a-b8fed82d3d55-kube-api-access-ftbnl\") pod \"barbican-db-create-bl9wp\" (UID: \"1029eddb-2336-4ec5-af4a-b8fed82d3d55\") " pod="openstack/barbican-db-create-bl9wp"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.094160 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1029eddb-2336-4ec5-af4a-b8fed82d3d55-operator-scripts\") pod \"barbican-db-create-bl9wp\" (UID: \"1029eddb-2336-4ec5-af4a-b8fed82d3d55\") " pod="openstack/barbican-db-create-bl9wp"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.115286 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftbnl\" (UniqueName: \"kubernetes.io/projected/1029eddb-2336-4ec5-af4a-b8fed82d3d55-kube-api-access-ftbnl\") pod \"barbican-db-create-bl9wp\" (UID: \"1029eddb-2336-4ec5-af4a-b8fed82d3d55\") " pod="openstack/barbican-db-create-bl9wp"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.154511 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f3b2-account-create-update-xhgnq"]
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.155540 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f3b2-account-create-update-xhgnq"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.159210 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.171053 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bl9wp"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.188796 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f3b2-account-create-update-xhgnq"]
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.195812 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spwj\" (UniqueName: \"kubernetes.io/projected/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-kube-api-access-7spwj\") pod \"barbican-5a9e-account-create-update-fzhw8\" (UID: \"4c910eba-ce23-4fd9-b08a-54b96fe6a2da\") " pod="openstack/barbican-5a9e-account-create-update-fzhw8"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.195867 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-operator-scripts\") pod \"neutron-db-create-4fx8g\" (UID: \"5c9e36d9-5d53-46d8-a91a-22dc9338ab58\") " pod="openstack/neutron-db-create-4fx8g"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.195911 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l6p4\" (UniqueName: \"kubernetes.io/projected/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-kube-api-access-9l6p4\") pod \"neutron-db-create-4fx8g\" (UID: \"5c9e36d9-5d53-46d8-a91a-22dc9338ab58\") " pod="openstack/neutron-db-create-4fx8g"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.196254 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-operator-scripts\") pod \"barbican-5a9e-account-create-update-fzhw8\" (UID: \"4c910eba-ce23-4fd9-b08a-54b96fe6a2da\") " pod="openstack/barbican-5a9e-account-create-update-fzhw8"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.196936 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-operator-scripts\") pod \"neutron-db-create-4fx8g\" (UID: \"5c9e36d9-5d53-46d8-a91a-22dc9338ab58\") " pod="openstack/neutron-db-create-4fx8g"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.220699 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l6p4\" (UniqueName: \"kubernetes.io/projected/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-kube-api-access-9l6p4\") pod \"neutron-db-create-4fx8g\" (UID: \"5c9e36d9-5d53-46d8-a91a-22dc9338ab58\") " pod="openstack/neutron-db-create-4fx8g"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.225182 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-v9z8k"]
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.229479 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v9z8k"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.232114 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.232292 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.232388 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.232469 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbntt"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.252255 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-v9z8k"]
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.298407 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-operator-scripts\") pod \"neutron-f3b2-account-create-update-xhgnq\" (UID: \"8c25243e-b6d9-40f5-9c3b-31947cf74cc9\") " pod="openstack/neutron-f3b2-account-create-update-xhgnq"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.298607 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-operator-scripts\") pod \"barbican-5a9e-account-create-update-fzhw8\" (UID: \"4c910eba-ce23-4fd9-b08a-54b96fe6a2da\") " pod="openstack/barbican-5a9e-account-create-update-fzhw8"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.298762 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7spwj\" (UniqueName: \"kubernetes.io/projected/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-kube-api-access-7spwj\") pod \"barbican-5a9e-account-create-update-fzhw8\" (UID: \"4c910eba-ce23-4fd9-b08a-54b96fe6a2da\") " pod="openstack/barbican-5a9e-account-create-update-fzhw8"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.298889 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vh8z\" (UniqueName: \"kubernetes.io/projected/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-kube-api-access-6vh8z\") pod \"neutron-f3b2-account-create-update-xhgnq\" (UID: \"8c25243e-b6d9-40f5-9c3b-31947cf74cc9\") " pod="openstack/neutron-f3b2-account-create-update-xhgnq"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.301534 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-operator-scripts\") pod \"barbican-5a9e-account-create-update-fzhw8\" (UID: \"4c910eba-ce23-4fd9-b08a-54b96fe6a2da\") " pod="openstack/barbican-5a9e-account-create-update-fzhw8"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.303245 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4fx8g"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.318435 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7spwj\" (UniqueName: \"kubernetes.io/projected/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-kube-api-access-7spwj\") pod \"barbican-5a9e-account-create-update-fzhw8\" (UID: \"4c910eba-ce23-4fd9-b08a-54b96fe6a2da\") " pod="openstack/barbican-5a9e-account-create-update-fzhw8"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.399183 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5a9e-account-create-update-fzhw8"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.400716 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsndg\" (UniqueName: \"kubernetes.io/projected/7f040612-306e-4ce2-b289-ed5be7bbc9e3-kube-api-access-hsndg\") pod \"keystone-db-sync-v9z8k\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " pod="openstack/keystone-db-sync-v9z8k"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.400923 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-config-data\") pod \"keystone-db-sync-v9z8k\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " pod="openstack/keystone-db-sync-v9z8k"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.401021 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vh8z\" (UniqueName: \"kubernetes.io/projected/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-kube-api-access-6vh8z\") pod \"neutron-f3b2-account-create-update-xhgnq\" (UID: \"8c25243e-b6d9-40f5-9c3b-31947cf74cc9\") " pod="openstack/neutron-f3b2-account-create-update-xhgnq"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.401091 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-operator-scripts\") pod \"neutron-f3b2-account-create-update-xhgnq\" (UID: \"8c25243e-b6d9-40f5-9c3b-31947cf74cc9\") " pod="openstack/neutron-f3b2-account-create-update-xhgnq"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.401176 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-combined-ca-bundle\") pod \"keystone-db-sync-v9z8k\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " pod="openstack/keystone-db-sync-v9z8k"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.401974 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-operator-scripts\") pod \"neutron-f3b2-account-create-update-xhgnq\" (UID: \"8c25243e-b6d9-40f5-9c3b-31947cf74cc9\") " pod="openstack/neutron-f3b2-account-create-update-xhgnq"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.422108 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vh8z\" (UniqueName: \"kubernetes.io/projected/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-kube-api-access-6vh8z\") pod \"neutron-f3b2-account-create-update-xhgnq\" (UID: \"8c25243e-b6d9-40f5-9c3b-31947cf74cc9\") " pod="openstack/neutron-f3b2-account-create-update-xhgnq"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.502552 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-combined-ca-bundle\") pod \"keystone-db-sync-v9z8k\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " pod="openstack/keystone-db-sync-v9z8k"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.502600 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsndg\" (UniqueName: \"kubernetes.io/projected/7f040612-306e-4ce2-b289-ed5be7bbc9e3-kube-api-access-hsndg\") pod \"keystone-db-sync-v9z8k\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " pod="openstack/keystone-db-sync-v9z8k"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.502650 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-config-data\") pod \"keystone-db-sync-v9z8k\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " pod="openstack/keystone-db-sync-v9z8k"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.503010 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f3b2-account-create-update-xhgnq"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.507484 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-config-data\") pod \"keystone-db-sync-v9z8k\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " pod="openstack/keystone-db-sync-v9z8k"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.511896 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-combined-ca-bundle\") pod \"keystone-db-sync-v9z8k\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " pod="openstack/keystone-db-sync-v9z8k"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.519081 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsndg\" (UniqueName: \"kubernetes.io/projected/7f040612-306e-4ce2-b289-ed5be7bbc9e3-kube-api-access-hsndg\") pod \"keystone-db-sync-v9z8k\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " pod="openstack/keystone-db-sync-v9z8k"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.579108 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v9z8k"
Feb 26 11:31:03 crc kubenswrapper[4699]: I0226 11:31:03.978665 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7"
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.066710 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-6nf48"]
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.066990 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" podUID="2a166832-199a-436c-85a2-4ccde527f180" containerName="dnsmasq-dns" containerID="cri-o://a6963bcffe5d258cd49c8f7db7cd1ef0c3f71763a18cb9d09e9e8a608d9fa6bd" gracePeriod=10
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.284716 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gjgfc"
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.414041 4699 generic.go:334] "Generic (PLEG): container finished" podID="2a166832-199a-436c-85a2-4ccde527f180" containerID="a6963bcffe5d258cd49c8f7db7cd1ef0c3f71763a18cb9d09e9e8a608d9fa6bd" exitCode=0
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.414228 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" event={"ID":"2a166832-199a-436c-85a2-4ccde527f180","Type":"ContainerDied","Data":"a6963bcffe5d258cd49c8f7db7cd1ef0c3f71763a18cb9d09e9e8a608d9fa6bd"}
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.422012 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gjgfc" event={"ID":"7c102f5c-cbaf-429e-b487-8b179f989720","Type":"ContainerDied","Data":"bfe030826ccd0b67fb14d360359010d9b579493ba9a7174535b351cb92366fa9"}
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.422045 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe030826ccd0b67fb14d360359010d9b579493ba9a7174535b351cb92366fa9"
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.422064 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gjgfc"
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.434418 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q42gd\" (UniqueName: \"kubernetes.io/projected/7c102f5c-cbaf-429e-b487-8b179f989720-kube-api-access-q42gd\") pod \"7c102f5c-cbaf-429e-b487-8b179f989720\" (UID: \"7c102f5c-cbaf-429e-b487-8b179f989720\") "
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.434510 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c102f5c-cbaf-429e-b487-8b179f989720-operator-scripts\") pod \"7c102f5c-cbaf-429e-b487-8b179f989720\" (UID: \"7c102f5c-cbaf-429e-b487-8b179f989720\") "
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.435051 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c102f5c-cbaf-429e-b487-8b179f989720-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c102f5c-cbaf-429e-b487-8b179f989720" (UID: "7c102f5c-cbaf-429e-b487-8b179f989720"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.435972 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c102f5c-cbaf-429e-b487-8b179f989720-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.454927 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c102f5c-cbaf-429e-b487-8b179f989720-kube-api-access-q42gd" (OuterVolumeSpecName: "kube-api-access-q42gd") pod "7c102f5c-cbaf-429e-b487-8b179f989720" (UID: "7c102f5c-cbaf-429e-b487-8b179f989720"). InnerVolumeSpecName "kube-api-access-q42gd".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:04 crc kubenswrapper[4699]: I0226 11:31:04.538006 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q42gd\" (UniqueName: \"kubernetes.io/projected/7c102f5c-cbaf-429e-b487-8b179f989720-kube-api-access-q42gd\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.005973 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bl9wp"] Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.020025 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5a9e-account-create-update-fzhw8"] Feb 26 11:31:05 crc kubenswrapper[4699]: W0226 11:31:05.027120 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c910eba_ce23_4fd9_b08a_54b96fe6a2da.slice/crio-ccd60b62a46256153e5613142403578ee18ee15936b92911827adb6661d9a59f WatchSource:0}: Error finding container ccd60b62a46256153e5613142403578ee18ee15936b92911827adb6661d9a59f: Status 404 returned error can't find the container with id ccd60b62a46256153e5613142403578ee18ee15936b92911827adb6661d9a59f Feb 26 11:31:05 crc kubenswrapper[4699]: W0226 11:31:05.037322 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1029eddb_2336_4ec5_af4a_b8fed82d3d55.slice/crio-681ac56ff402c4cccc4c7060f9f4f090a2936d815a9584b9ec075bbc793e90c8 WatchSource:0}: Error finding container 681ac56ff402c4cccc4c7060f9f4f090a2936d815a9584b9ec075bbc793e90c8: Status 404 returned error can't find the container with id 681ac56ff402c4cccc4c7060f9f4f090a2936d815a9584b9ec075bbc793e90c8 Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.097844 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.206191 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nrvng-config-gbwh4"] Feb 26 11:31:05 crc kubenswrapper[4699]: W0226 11:31:05.220473 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46fd8768_dfd3_4bb1_b7c9_f4d803bf829f.slice/crio-343bd3b7a46be415742ed56cacad4cbc06c91d65f0c4265fc8e92491ddf876ad WatchSource:0}: Error finding container 343bd3b7a46be415742ed56cacad4cbc06c91d65f0c4265fc8e92491ddf876ad: Status 404 returned error can't find the container with id 343bd3b7a46be415742ed56cacad4cbc06c91d65f0c4265fc8e92491ddf876ad Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.251342 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-sb\") pod \"2a166832-199a-436c-85a2-4ccde527f180\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.251675 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-nb\") pod \"2a166832-199a-436c-85a2-4ccde527f180\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.251793 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-config\") pod \"2a166832-199a-436c-85a2-4ccde527f180\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.251954 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-dns-svc\") pod \"2a166832-199a-436c-85a2-4ccde527f180\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.252098 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pjrg\" (UniqueName: \"kubernetes.io/projected/2a166832-199a-436c-85a2-4ccde527f180-kube-api-access-7pjrg\") pod \"2a166832-199a-436c-85a2-4ccde527f180\" (UID: \"2a166832-199a-436c-85a2-4ccde527f180\") " Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.260836 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a166832-199a-436c-85a2-4ccde527f180-kube-api-access-7pjrg" (OuterVolumeSpecName: "kube-api-access-7pjrg") pod "2a166832-199a-436c-85a2-4ccde527f180" (UID: "2a166832-199a-436c-85a2-4ccde527f180"). InnerVolumeSpecName "kube-api-access-7pjrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.341521 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4fx8g"] Feb 26 11:31:05 crc kubenswrapper[4699]: W0226 11:31:05.342102 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f040612_306e_4ce2_b289_ed5be7bbc9e3.slice/crio-8bd38e2addb3bddd3fc1ffb0bcdd8462f8cbea6db1e2b7de24e31f6134f41fd6 WatchSource:0}: Error finding container 8bd38e2addb3bddd3fc1ffb0bcdd8462f8cbea6db1e2b7de24e31f6134f41fd6: Status 404 returned error can't find the container with id 8bd38e2addb3bddd3fc1ffb0bcdd8462f8cbea6db1e2b7de24e31f6134f41fd6 Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.354012 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pjrg\" (UniqueName: \"kubernetes.io/projected/2a166832-199a-436c-85a2-4ccde527f180-kube-api-access-7pjrg\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:05 
crc kubenswrapper[4699]: I0226 11:31:05.371878 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-v9z8k"] Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.385704 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2a166832-199a-436c-85a2-4ccde527f180" (UID: "2a166832-199a-436c-85a2-4ccde527f180"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.392795 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2a166832-199a-436c-85a2-4ccde527f180" (UID: "2a166832-199a-436c-85a2-4ccde527f180"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:05 crc kubenswrapper[4699]: W0226 11:31:05.399064 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c25243e_b6d9_40f5_9c3b_31947cf74cc9.slice/crio-1e068fdc7bd3850e6a731714657c56632475a810f22a5b66f44175ff08b5063d WatchSource:0}: Error finding container 1e068fdc7bd3850e6a731714657c56632475a810f22a5b66f44175ff08b5063d: Status 404 returned error can't find the container with id 1e068fdc7bd3850e6a731714657c56632475a810f22a5b66f44175ff08b5063d Feb 26 11:31:05 crc kubenswrapper[4699]: W0226 11:31:05.408492 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a68fa18_1c49_4d3d_bc5f_75763944d818.slice/crio-decc5b383a239243923da333848dd63209b18fa929f715b34f5afb37b483d9dc WatchSource:0}: Error finding container decc5b383a239243923da333848dd63209b18fa929f715b34f5afb37b483d9dc: Status 404 
returned error can't find the container with id decc5b383a239243923da333848dd63209b18fa929f715b34f5afb37b483d9dc Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.424036 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a166832-199a-436c-85a2-4ccde527f180" (UID: "2a166832-199a-436c-85a2-4ccde527f180"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.430776 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-config" (OuterVolumeSpecName: "config") pod "2a166832-199a-436c-85a2-4ccde527f180" (UID: "2a166832-199a-436c-85a2-4ccde527f180"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.442066 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v77r5"] Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.455691 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.457505 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.457603 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.457659 4699 reconciler_common.go:293] "Volume 
detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a166832-199a-436c-85a2-4ccde527f180-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.465944 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f3b2-account-create-update-xhgnq"] Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.470930 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v9z8k" event={"ID":"7f040612-306e-4ce2-b289-ed5be7bbc9e3","Type":"ContainerStarted","Data":"8bd38e2addb3bddd3fc1ffb0bcdd8462f8cbea6db1e2b7de24e31f6134f41fd6"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.473856 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v77r5" event={"ID":"758bbe1c-d826-47f7-aff6-54e9fc4ebe63","Type":"ContainerStarted","Data":"9e7dbf7b2fb001aa6400d97b6a5c91f4d444c0f906552e3eb3f37bb196932e99"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.479422 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f3b2-account-create-update-xhgnq" event={"ID":"8c25243e-b6d9-40f5-9c3b-31947cf74cc9","Type":"ContainerStarted","Data":"1e068fdc7bd3850e6a731714657c56632475a810f22a5b66f44175ff08b5063d"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.487044 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a7a2-account-create-update-l2mt4"] Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.488953 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" event={"ID":"2a166832-199a-436c-85a2-4ccde527f180","Type":"ContainerDied","Data":"e37733ce4b3de5c1e636da1d778df1b2746e600646623b6c23cb5510f0a9db33"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.489007 4699 scope.go:117] "RemoveContainer" containerID="a6963bcffe5d258cd49c8f7db7cd1ef0c3f71763a18cb9d09e9e8a608d9fa6bd" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 
11:31:05.489209 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-6nf48" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.500615 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nblvp" event={"ID":"72c1d656-4f85-483b-b7a2-6132b71ae093","Type":"ContainerStarted","Data":"99b2baa30a79cd9b1afa4299366118e58d2c6c18512f6454267d08d3b636f3e6"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.506159 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bl9wp" event={"ID":"1029eddb-2336-4ec5-af4a-b8fed82d3d55","Type":"ContainerStarted","Data":"7c9888c6347c41b14207598f1324ae87027fe21cf208ac04db043c3350762dde"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.506201 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bl9wp" event={"ID":"1029eddb-2336-4ec5-af4a-b8fed82d3d55","Type":"ContainerStarted","Data":"681ac56ff402c4cccc4c7060f9f4f090a2936d815a9584b9ec075bbc793e90c8"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.510607 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a7a2-account-create-update-l2mt4" event={"ID":"7a68fa18-1c49-4d3d-bc5f-75763944d818","Type":"ContainerStarted","Data":"decc5b383a239243923da333848dd63209b18fa929f715b34f5afb37b483d9dc"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.514719 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nrvng-config-gbwh4" event={"ID":"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f","Type":"ContainerStarted","Data":"343bd3b7a46be415742ed56cacad4cbc06c91d65f0c4265fc8e92491ddf876ad"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.523564 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a9e-account-create-update-fzhw8" 
event={"ID":"4c910eba-ce23-4fd9-b08a-54b96fe6a2da","Type":"ContainerStarted","Data":"8ac6484a77ece8a11d14d59104b361e660535022ac1b3f3359289cdf598c1ea3"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.523637 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a9e-account-create-update-fzhw8" event={"ID":"4c910eba-ce23-4fd9-b08a-54b96fe6a2da","Type":"ContainerStarted","Data":"ccd60b62a46256153e5613142403578ee18ee15936b92911827adb6661d9a59f"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.532782 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4fx8g" event={"ID":"5c9e36d9-5d53-46d8-a91a-22dc9338ab58","Type":"ContainerStarted","Data":"cc8acab0f309ee31e4dd19e0655a0a1330830aac52464ec3c2b6fa9110dbc2ad"} Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.536104 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-nblvp" podStartSLOduration=2.943763278 podStartE2EDuration="15.536078636s" podCreationTimestamp="2026-02-26 11:30:50 +0000 UTC" firstStartedPulling="2026-02-26 11:30:51.667442721 +0000 UTC m=+1197.478269145" lastFinishedPulling="2026-02-26 11:31:04.259758069 +0000 UTC m=+1210.070584503" observedRunningTime="2026-02-26 11:31:05.530478305 +0000 UTC m=+1211.341304739" watchObservedRunningTime="2026-02-26 11:31:05.536078636 +0000 UTC m=+1211.346905080" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.547650 4699 scope.go:117] "RemoveContainer" containerID="4ad9a83fa9f5197d955a8f1565b66571572dedbb333404d507411352c78978c6" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.589285 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-6nf48"] Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.624937 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-6nf48"] Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.639525 4699 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-bl9wp" podStartSLOduration=3.6395058430000002 podStartE2EDuration="3.639505843s" podCreationTimestamp="2026-02-26 11:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:05.596670646 +0000 UTC m=+1211.407497080" watchObservedRunningTime="2026-02-26 11:31:05.639505843 +0000 UTC m=+1211.450332277" Feb 26 11:31:05 crc kubenswrapper[4699]: I0226 11:31:05.655781 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5a9e-account-create-update-fzhw8" podStartSLOduration=2.655754522 podStartE2EDuration="2.655754522s" podCreationTimestamp="2026-02-26 11:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:05.614343837 +0000 UTC m=+1211.425170291" watchObservedRunningTime="2026-02-26 11:31:05.655754522 +0000 UTC m=+1211.466580966" Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.277634 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a166832-199a-436c-85a2-4ccde527f180" path="/var/lib/kubelet/pods/2a166832-199a-436c-85a2-4ccde527f180/volumes" Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.541860 4699 generic.go:334] "Generic (PLEG): container finished" podID="8c25243e-b6d9-40f5-9c3b-31947cf74cc9" containerID="d84c1ad7d451293243927fb877d730897ca18c570d340c3870da5a49cf7b4e49" exitCode=0 Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.541913 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f3b2-account-create-update-xhgnq" event={"ID":"8c25243e-b6d9-40f5-9c3b-31947cf74cc9","Type":"ContainerDied","Data":"d84c1ad7d451293243927fb877d730897ca18c570d340c3870da5a49cf7b4e49"} Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.544785 4699 
generic.go:334] "Generic (PLEG): container finished" podID="7a68fa18-1c49-4d3d-bc5f-75763944d818" containerID="0d9733430c4e718e7aff62771d81bae98ffdfc65e518351b1e877ae065bfd725" exitCode=0 Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.544847 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a7a2-account-create-update-l2mt4" event={"ID":"7a68fa18-1c49-4d3d-bc5f-75763944d818","Type":"ContainerDied","Data":"0d9733430c4e718e7aff62771d81bae98ffdfc65e518351b1e877ae065bfd725"} Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.549106 4699 generic.go:334] "Generic (PLEG): container finished" podID="46fd8768-dfd3-4bb1-b7c9-f4d803bf829f" containerID="dad7fa90e67d3f965c26f7c4abb45503a74b01c5861c388e8b2b6571901121e5" exitCode=0 Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.549155 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nrvng-config-gbwh4" event={"ID":"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f","Type":"ContainerDied","Data":"dad7fa90e67d3f965c26f7c4abb45503a74b01c5861c388e8b2b6571901121e5"} Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.551196 4699 generic.go:334] "Generic (PLEG): container finished" podID="4c910eba-ce23-4fd9-b08a-54b96fe6a2da" containerID="8ac6484a77ece8a11d14d59104b361e660535022ac1b3f3359289cdf598c1ea3" exitCode=0 Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.551228 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a9e-account-create-update-fzhw8" event={"ID":"4c910eba-ce23-4fd9-b08a-54b96fe6a2da","Type":"ContainerDied","Data":"8ac6484a77ece8a11d14d59104b361e660535022ac1b3f3359289cdf598c1ea3"} Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.553347 4699 generic.go:334] "Generic (PLEG): container finished" podID="5c9e36d9-5d53-46d8-a91a-22dc9338ab58" containerID="f9bc95d14d4ca0f4150bed4b727cc55b90093e4c3307ebc23256f5bd6248badb" exitCode=0 Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.553400 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4fx8g" event={"ID":"5c9e36d9-5d53-46d8-a91a-22dc9338ab58","Type":"ContainerDied","Data":"f9bc95d14d4ca0f4150bed4b727cc55b90093e4c3307ebc23256f5bd6248badb"} Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.557327 4699 generic.go:334] "Generic (PLEG): container finished" podID="1029eddb-2336-4ec5-af4a-b8fed82d3d55" containerID="7c9888c6347c41b14207598f1324ae87027fe21cf208ac04db043c3350762dde" exitCode=0 Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.557402 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bl9wp" event={"ID":"1029eddb-2336-4ec5-af4a-b8fed82d3d55","Type":"ContainerDied","Data":"7c9888c6347c41b14207598f1324ae87027fe21cf208ac04db043c3350762dde"} Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.558639 4699 generic.go:334] "Generic (PLEG): container finished" podID="758bbe1c-d826-47f7-aff6-54e9fc4ebe63" containerID="6a7d35b314cb71b7aea626b804eac24b58050ec797d6079e6362282e3f1a7a28" exitCode=0 Feb 26 11:31:06 crc kubenswrapper[4699]: I0226 11:31:06.559469 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v77r5" event={"ID":"758bbe1c-d826-47f7-aff6-54e9fc4ebe63","Type":"ContainerDied","Data":"6a7d35b314cb71b7aea626b804eac24b58050ec797d6079e6362282e3f1a7a28"} Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.028052 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v77r5" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.076213 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4fx8g" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.103675 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bl9wp" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.106738 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nrvng-config-gbwh4" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.136620 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f3b2-account-create-update-xhgnq" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.150011 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-operator-scripts\") pod \"758bbe1c-d826-47f7-aff6-54e9fc4ebe63\" (UID: \"758bbe1c-d826-47f7-aff6-54e9fc4ebe63\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.150050 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xts5\" (UniqueName: \"kubernetes.io/projected/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-kube-api-access-5xts5\") pod \"758bbe1c-d826-47f7-aff6-54e9fc4ebe63\" (UID: \"758bbe1c-d826-47f7-aff6-54e9fc4ebe63\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.150828 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "758bbe1c-d826-47f7-aff6-54e9fc4ebe63" (UID: "758bbe1c-d826-47f7-aff6-54e9fc4ebe63"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.152419 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a7a2-account-create-update-l2mt4" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.159903 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-kube-api-access-5xts5" (OuterVolumeSpecName: "kube-api-access-5xts5") pod "758bbe1c-d826-47f7-aff6-54e9fc4ebe63" (UID: "758bbe1c-d826-47f7-aff6-54e9fc4ebe63"). InnerVolumeSpecName "kube-api-access-5xts5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.210803 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5a9e-account-create-update-fzhw8" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.252966 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-additional-scripts\") pod \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253095 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run-ovn\") pod \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253146 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-scripts\") pod \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253181 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8wvk\" (UniqueName: 
\"kubernetes.io/projected/7a68fa18-1c49-4d3d-bc5f-75763944d818-kube-api-access-b8wvk\") pod \"7a68fa18-1c49-4d3d-bc5f-75763944d818\" (UID: \"7a68fa18-1c49-4d3d-bc5f-75763944d818\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253260 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1029eddb-2336-4ec5-af4a-b8fed82d3d55-operator-scripts\") pod \"1029eddb-2336-4ec5-af4a-b8fed82d3d55\" (UID: \"1029eddb-2336-4ec5-af4a-b8fed82d3d55\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253288 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vh8z\" (UniqueName: \"kubernetes.io/projected/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-kube-api-access-6vh8z\") pod \"8c25243e-b6d9-40f5-9c3b-31947cf74cc9\" (UID: \"8c25243e-b6d9-40f5-9c3b-31947cf74cc9\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253322 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l6p4\" (UniqueName: \"kubernetes.io/projected/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-kube-api-access-9l6p4\") pod \"5c9e36d9-5d53-46d8-a91a-22dc9338ab58\" (UID: \"5c9e36d9-5d53-46d8-a91a-22dc9338ab58\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253350 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-operator-scripts\") pod \"5c9e36d9-5d53-46d8-a91a-22dc9338ab58\" (UID: \"5c9e36d9-5d53-46d8-a91a-22dc9338ab58\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253386 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-operator-scripts\") pod \"8c25243e-b6d9-40f5-9c3b-31947cf74cc9\" (UID: \"8c25243e-b6d9-40f5-9c3b-31947cf74cc9\") " Feb 26 
11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253413 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run\") pod \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253432 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a68fa18-1c49-4d3d-bc5f-75763944d818-operator-scripts\") pod \"7a68fa18-1c49-4d3d-bc5f-75763944d818\" (UID: \"7a68fa18-1c49-4d3d-bc5f-75763944d818\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253459 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-log-ovn\") pod \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253519 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xbdb\" (UniqueName: \"kubernetes.io/projected/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-kube-api-access-2xbdb\") pod \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\" (UID: \"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253543 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftbnl\" (UniqueName: \"kubernetes.io/projected/1029eddb-2336-4ec5-af4a-b8fed82d3d55-kube-api-access-ftbnl\") pod \"1029eddb-2336-4ec5-af4a-b8fed82d3d55\" (UID: \"1029eddb-2336-4ec5-af4a-b8fed82d3d55\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.253855 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run-ovn" 
(OuterVolumeSpecName: "var-run-ovn") pod "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f" (UID: "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.254029 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.254043 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xts5\" (UniqueName: \"kubernetes.io/projected/758bbe1c-d826-47f7-aff6-54e9fc4ebe63-kube-api-access-5xts5\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.254055 4699 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.254113 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run" (OuterVolumeSpecName: "var-run") pod "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f" (UID: "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.254464 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c9e36d9-5d53-46d8-a91a-22dc9338ab58" (UID: "5c9e36d9-5d53-46d8-a91a-22dc9338ab58"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.254734 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c25243e-b6d9-40f5-9c3b-31947cf74cc9" (UID: "8c25243e-b6d9-40f5-9c3b-31947cf74cc9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.255025 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-scripts" (OuterVolumeSpecName: "scripts") pod "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f" (UID: "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.255038 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1029eddb-2336-4ec5-af4a-b8fed82d3d55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1029eddb-2336-4ec5-af4a-b8fed82d3d55" (UID: "1029eddb-2336-4ec5-af4a-b8fed82d3d55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.255062 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f" (UID: "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.255308 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a68fa18-1c49-4d3d-bc5f-75763944d818-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a68fa18-1c49-4d3d-bc5f-75763944d818" (UID: "7a68fa18-1c49-4d3d-bc5f-75763944d818"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.255672 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f" (UID: "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.256938 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a68fa18-1c49-4d3d-bc5f-75763944d818-kube-api-access-b8wvk" (OuterVolumeSpecName: "kube-api-access-b8wvk") pod "7a68fa18-1c49-4d3d-bc5f-75763944d818" (UID: "7a68fa18-1c49-4d3d-bc5f-75763944d818"). InnerVolumeSpecName "kube-api-access-b8wvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.256972 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1029eddb-2336-4ec5-af4a-b8fed82d3d55-kube-api-access-ftbnl" (OuterVolumeSpecName: "kube-api-access-ftbnl") pod "1029eddb-2336-4ec5-af4a-b8fed82d3d55" (UID: "1029eddb-2336-4ec5-af4a-b8fed82d3d55"). InnerVolumeSpecName "kube-api-access-ftbnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.257604 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-kube-api-access-9l6p4" (OuterVolumeSpecName: "kube-api-access-9l6p4") pod "5c9e36d9-5d53-46d8-a91a-22dc9338ab58" (UID: "5c9e36d9-5d53-46d8-a91a-22dc9338ab58"). InnerVolumeSpecName "kube-api-access-9l6p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.258349 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-kube-api-access-2xbdb" (OuterVolumeSpecName: "kube-api-access-2xbdb") pod "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f" (UID: "46fd8768-dfd3-4bb1-b7c9-f4d803bf829f"). InnerVolumeSpecName "kube-api-access-2xbdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.259076 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-kube-api-access-6vh8z" (OuterVolumeSpecName: "kube-api-access-6vh8z") pod "8c25243e-b6d9-40f5-9c3b-31947cf74cc9" (UID: "8c25243e-b6d9-40f5-9c3b-31947cf74cc9"). InnerVolumeSpecName "kube-api-access-6vh8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.355773 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-operator-scripts\") pod \"4c910eba-ce23-4fd9-b08a-54b96fe6a2da\" (UID: \"4c910eba-ce23-4fd9-b08a-54b96fe6a2da\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.355862 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7spwj\" (UniqueName: \"kubernetes.io/projected/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-kube-api-access-7spwj\") pod \"4c910eba-ce23-4fd9-b08a-54b96fe6a2da\" (UID: \"4c910eba-ce23-4fd9-b08a-54b96fe6a2da\") " Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356404 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c910eba-ce23-4fd9-b08a-54b96fe6a2da" (UID: "4c910eba-ce23-4fd9-b08a-54b96fe6a2da"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356438 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356467 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8wvk\" (UniqueName: \"kubernetes.io/projected/7a68fa18-1c49-4d3d-bc5f-75763944d818-kube-api-access-b8wvk\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356487 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1029eddb-2336-4ec5-af4a-b8fed82d3d55-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356499 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vh8z\" (UniqueName: \"kubernetes.io/projected/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-kube-api-access-6vh8z\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356511 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l6p4\" (UniqueName: \"kubernetes.io/projected/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-kube-api-access-9l6p4\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356523 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e36d9-5d53-46d8-a91a-22dc9338ab58-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356535 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c25243e-b6d9-40f5-9c3b-31947cf74cc9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc 
kubenswrapper[4699]: I0226 11:31:10.356547 4699 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-run\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356559 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a68fa18-1c49-4d3d-bc5f-75763944d818-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356573 4699 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356585 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xbdb\" (UniqueName: \"kubernetes.io/projected/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-kube-api-access-2xbdb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356598 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftbnl\" (UniqueName: \"kubernetes.io/projected/1029eddb-2336-4ec5-af4a-b8fed82d3d55-kube-api-access-ftbnl\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.356610 4699 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.359103 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-kube-api-access-7spwj" (OuterVolumeSpecName: "kube-api-access-7spwj") pod "4c910eba-ce23-4fd9-b08a-54b96fe6a2da" (UID: "4c910eba-ce23-4fd9-b08a-54b96fe6a2da"). 
InnerVolumeSpecName "kube-api-access-7spwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.457802 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.457849 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7spwj\" (UniqueName: \"kubernetes.io/projected/4c910eba-ce23-4fd9-b08a-54b96fe6a2da-kube-api-access-7spwj\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.610485 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bl9wp" event={"ID":"1029eddb-2336-4ec5-af4a-b8fed82d3d55","Type":"ContainerDied","Data":"681ac56ff402c4cccc4c7060f9f4f090a2936d815a9584b9ec075bbc793e90c8"} Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.610756 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="681ac56ff402c4cccc4c7060f9f4f090a2936d815a9584b9ec075bbc793e90c8" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.610653 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bl9wp" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.614573 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v9z8k" event={"ID":"7f040612-306e-4ce2-b289-ed5be7bbc9e3","Type":"ContainerStarted","Data":"5b4e9b46d7abb3978f9445cbfeebb825f9cd664cf115705fdae6f65a2a171de8"} Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.617268 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v77r5" event={"ID":"758bbe1c-d826-47f7-aff6-54e9fc4ebe63","Type":"ContainerDied","Data":"9e7dbf7b2fb001aa6400d97b6a5c91f4d444c0f906552e3eb3f37bb196932e99"} Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.617295 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e7dbf7b2fb001aa6400d97b6a5c91f4d444c0f906552e3eb3f37bb196932e99" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.617323 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v77r5" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.619054 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f3b2-account-create-update-xhgnq" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.619080 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f3b2-account-create-update-xhgnq" event={"ID":"8c25243e-b6d9-40f5-9c3b-31947cf74cc9","Type":"ContainerDied","Data":"1e068fdc7bd3850e6a731714657c56632475a810f22a5b66f44175ff08b5063d"} Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.619107 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e068fdc7bd3850e6a731714657c56632475a810f22a5b66f44175ff08b5063d" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.620582 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a7a2-account-create-update-l2mt4" event={"ID":"7a68fa18-1c49-4d3d-bc5f-75763944d818","Type":"ContainerDied","Data":"decc5b383a239243923da333848dd63209b18fa929f715b34f5afb37b483d9dc"} Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.620610 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="decc5b383a239243923da333848dd63209b18fa929f715b34f5afb37b483d9dc" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.620660 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a7a2-account-create-update-l2mt4" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.623333 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nrvng-config-gbwh4" event={"ID":"46fd8768-dfd3-4bb1-b7c9-f4d803bf829f","Type":"ContainerDied","Data":"343bd3b7a46be415742ed56cacad4cbc06c91d65f0c4265fc8e92491ddf876ad"} Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.623371 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="343bd3b7a46be415742ed56cacad4cbc06c91d65f0c4265fc8e92491ddf876ad" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.623577 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nrvng-config-gbwh4" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.625145 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a9e-account-create-update-fzhw8" event={"ID":"4c910eba-ce23-4fd9-b08a-54b96fe6a2da","Type":"ContainerDied","Data":"ccd60b62a46256153e5613142403578ee18ee15936b92911827adb6661d9a59f"} Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.625205 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5a9e-account-create-update-fzhw8" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.625180 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccd60b62a46256153e5613142403578ee18ee15936b92911827adb6661d9a59f" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.642700 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4fx8g" event={"ID":"5c9e36d9-5d53-46d8-a91a-22dc9338ab58","Type":"ContainerDied","Data":"cc8acab0f309ee31e4dd19e0655a0a1330830aac52464ec3c2b6fa9110dbc2ad"} Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.642744 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc8acab0f309ee31e4dd19e0655a0a1330830aac52464ec3c2b6fa9110dbc2ad" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.642832 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4fx8g" Feb 26 11:31:10 crc kubenswrapper[4699]: I0226 11:31:10.644569 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-v9z8k" podStartSLOduration=3.038721853 podStartE2EDuration="7.644542241s" podCreationTimestamp="2026-02-26 11:31:03 +0000 UTC" firstStartedPulling="2026-02-26 11:31:05.348335976 +0000 UTC m=+1211.159162410" lastFinishedPulling="2026-02-26 11:31:09.954156364 +0000 UTC m=+1215.764982798" observedRunningTime="2026-02-26 11:31:10.636479699 +0000 UTC m=+1216.447306133" watchObservedRunningTime="2026-02-26 11:31:10.644542241 +0000 UTC m=+1216.455368675" Feb 26 11:31:11 crc kubenswrapper[4699]: I0226 11:31:11.220293 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nrvng-config-gbwh4"] Feb 26 11:31:11 crc kubenswrapper[4699]: I0226 11:31:11.227184 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nrvng-config-gbwh4"] Feb 26 11:31:11 crc 
kubenswrapper[4699]: I0226 11:31:11.585143 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:31:11 crc kubenswrapper[4699]: I0226 11:31:11.585203 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:31:11 crc kubenswrapper[4699]: I0226 11:31:11.585251 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:31:11 crc kubenswrapper[4699]: I0226 11:31:11.585954 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c2d25c558a927e58d9962b6f55de97dac3f222cb5bc89a35791fca832759b03"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:31:11 crc kubenswrapper[4699]: I0226 11:31:11.586019 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://2c2d25c558a927e58d9962b6f55de97dac3f222cb5bc89a35791fca832759b03" gracePeriod=600 Feb 26 11:31:12 crc kubenswrapper[4699]: I0226 11:31:12.271979 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46fd8768-dfd3-4bb1-b7c9-f4d803bf829f" 
path="/var/lib/kubelet/pods/46fd8768-dfd3-4bb1-b7c9-f4d803bf829f/volumes" Feb 26 11:31:12 crc kubenswrapper[4699]: I0226 11:31:12.659554 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="2c2d25c558a927e58d9962b6f55de97dac3f222cb5bc89a35791fca832759b03" exitCode=0 Feb 26 11:31:12 crc kubenswrapper[4699]: I0226 11:31:12.659612 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"2c2d25c558a927e58d9962b6f55de97dac3f222cb5bc89a35791fca832759b03"} Feb 26 11:31:12 crc kubenswrapper[4699]: I0226 11:31:12.659652 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"e281597aa593fa5c9ddd67a617de4ed4d3363a8c5b9ebcaaf78cd70cd013eef6"} Feb 26 11:31:12 crc kubenswrapper[4699]: I0226 11:31:12.659673 4699 scope.go:117] "RemoveContainer" containerID="119837a96f7eb017f5f7e56268e9cf0e4a17276f8f8dd21ae8a57f4864ea4cf7" Feb 26 11:31:13 crc kubenswrapper[4699]: I0226 11:31:13.669841 4699 generic.go:334] "Generic (PLEG): container finished" podID="72c1d656-4f85-483b-b7a2-6132b71ae093" containerID="99b2baa30a79cd9b1afa4299366118e58d2c6c18512f6454267d08d3b636f3e6" exitCode=0 Feb 26 11:31:13 crc kubenswrapper[4699]: I0226 11:31:13.669997 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nblvp" event={"ID":"72c1d656-4f85-483b-b7a2-6132b71ae093","Type":"ContainerDied","Data":"99b2baa30a79cd9b1afa4299366118e58d2c6c18512f6454267d08d3b636f3e6"} Feb 26 11:31:13 crc kubenswrapper[4699]: I0226 11:31:13.671859 4699 generic.go:334] "Generic (PLEG): container finished" podID="7f040612-306e-4ce2-b289-ed5be7bbc9e3" containerID="5b4e9b46d7abb3978f9445cbfeebb825f9cd664cf115705fdae6f65a2a171de8" exitCode=0 
Feb 26 11:31:13 crc kubenswrapper[4699]: I0226 11:31:13.671922 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v9z8k" event={"ID":"7f040612-306e-4ce2-b289-ed5be7bbc9e3","Type":"ContainerDied","Data":"5b4e9b46d7abb3978f9445cbfeebb825f9cd664cf115705fdae6f65a2a171de8"} Feb 26 11:31:14 crc kubenswrapper[4699]: I0226 11:31:14.985616 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v9z8k" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.131547 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsndg\" (UniqueName: \"kubernetes.io/projected/7f040612-306e-4ce2-b289-ed5be7bbc9e3-kube-api-access-hsndg\") pod \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.131976 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-combined-ca-bundle\") pod \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.132017 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-config-data\") pod \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\" (UID: \"7f040612-306e-4ce2-b289-ed5be7bbc9e3\") " Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.137620 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f040612-306e-4ce2-b289-ed5be7bbc9e3-kube-api-access-hsndg" (OuterVolumeSpecName: "kube-api-access-hsndg") pod "7f040612-306e-4ce2-b289-ed5be7bbc9e3" (UID: "7f040612-306e-4ce2-b289-ed5be7bbc9e3"). InnerVolumeSpecName "kube-api-access-hsndg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.154022 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f040612-306e-4ce2-b289-ed5be7bbc9e3" (UID: "7f040612-306e-4ce2-b289-ed5be7bbc9e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.205063 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-config-data" (OuterVolumeSpecName: "config-data") pod "7f040612-306e-4ce2-b289-ed5be7bbc9e3" (UID: "7f040612-306e-4ce2-b289-ed5be7bbc9e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.235051 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.235076 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f040612-306e-4ce2-b289-ed5be7bbc9e3-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.235105 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsndg\" (UniqueName: \"kubernetes.io/projected/7f040612-306e-4ce2-b289-ed5be7bbc9e3-kube-api-access-hsndg\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.261488 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nblvp" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.335676 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbj5f\" (UniqueName: \"kubernetes.io/projected/72c1d656-4f85-483b-b7a2-6132b71ae093-kube-api-access-vbj5f\") pod \"72c1d656-4f85-483b-b7a2-6132b71ae093\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.335777 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-config-data\") pod \"72c1d656-4f85-483b-b7a2-6132b71ae093\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.335818 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-combined-ca-bundle\") pod \"72c1d656-4f85-483b-b7a2-6132b71ae093\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.336371 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-db-sync-config-data\") pod \"72c1d656-4f85-483b-b7a2-6132b71ae093\" (UID: \"72c1d656-4f85-483b-b7a2-6132b71ae093\") " Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.339596 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "72c1d656-4f85-483b-b7a2-6132b71ae093" (UID: "72c1d656-4f85-483b-b7a2-6132b71ae093"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.339784 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c1d656-4f85-483b-b7a2-6132b71ae093-kube-api-access-vbj5f" (OuterVolumeSpecName: "kube-api-access-vbj5f") pod "72c1d656-4f85-483b-b7a2-6132b71ae093" (UID: "72c1d656-4f85-483b-b7a2-6132b71ae093"). InnerVolumeSpecName "kube-api-access-vbj5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.358100 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72c1d656-4f85-483b-b7a2-6132b71ae093" (UID: "72c1d656-4f85-483b-b7a2-6132b71ae093"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.379116 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-config-data" (OuterVolumeSpecName: "config-data") pod "72c1d656-4f85-483b-b7a2-6132b71ae093" (UID: "72c1d656-4f85-483b-b7a2-6132b71ae093"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.438092 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbj5f\" (UniqueName: \"kubernetes.io/projected/72c1d656-4f85-483b-b7a2-6132b71ae093-kube-api-access-vbj5f\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.438143 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.438154 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.438164 4699 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c1d656-4f85-483b-b7a2-6132b71ae093-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.693275 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v9z8k" event={"ID":"7f040612-306e-4ce2-b289-ed5be7bbc9e3","Type":"ContainerDied","Data":"8bd38e2addb3bddd3fc1ffb0bcdd8462f8cbea6db1e2b7de24e31f6134f41fd6"} Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.693313 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-v9z8k" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.693327 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bd38e2addb3bddd3fc1ffb0bcdd8462f8cbea6db1e2b7de24e31f6134f41fd6" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.701847 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nblvp" event={"ID":"72c1d656-4f85-483b-b7a2-6132b71ae093","Type":"ContainerDied","Data":"c8f798d0cd617868616cba798cc6a29d56d3b9e5026ef1c6b84fc3016e7bc40e"} Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.701901 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8f798d0cd617868616cba798cc6a29d56d3b9e5026ef1c6b84fc3016e7bc40e" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.701930 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nblvp" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.957518 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-4bmfl"] Feb 26 11:31:15 crc kubenswrapper[4699]: E0226 11:31:15.964441 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c1d656-4f85-483b-b7a2-6132b71ae093" containerName="glance-db-sync" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964473 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c1d656-4f85-483b-b7a2-6132b71ae093" containerName="glance-db-sync" Feb 26 11:31:15 crc kubenswrapper[4699]: E0226 11:31:15.964493 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758bbe1c-d826-47f7-aff6-54e9fc4ebe63" containerName="mariadb-database-create" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964504 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="758bbe1c-d826-47f7-aff6-54e9fc4ebe63" containerName="mariadb-database-create" Feb 26 11:31:15 crc kubenswrapper[4699]: 
E0226 11:31:15.964520 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c102f5c-cbaf-429e-b487-8b179f989720" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964528 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c102f5c-cbaf-429e-b487-8b179f989720" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: E0226 11:31:15.964544 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1029eddb-2336-4ec5-af4a-b8fed82d3d55" containerName="mariadb-database-create" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964554 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="1029eddb-2336-4ec5-af4a-b8fed82d3d55" containerName="mariadb-database-create" Feb 26 11:31:15 crc kubenswrapper[4699]: E0226 11:31:15.964570 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c25243e-b6d9-40f5-9c3b-31947cf74cc9" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964579 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c25243e-b6d9-40f5-9c3b-31947cf74cc9" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: E0226 11:31:15.964590 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f040612-306e-4ce2-b289-ed5be7bbc9e3" containerName="keystone-db-sync" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964597 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f040612-306e-4ce2-b289-ed5be7bbc9e3" containerName="keystone-db-sync" Feb 26 11:31:15 crc kubenswrapper[4699]: E0226 11:31:15.964614 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a166832-199a-436c-85a2-4ccde527f180" containerName="init" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964622 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a166832-199a-436c-85a2-4ccde527f180" containerName="init" Feb 
26 11:31:15 crc kubenswrapper[4699]: E0226 11:31:15.964631 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c910eba-ce23-4fd9-b08a-54b96fe6a2da" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964639 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c910eba-ce23-4fd9-b08a-54b96fe6a2da" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: E0226 11:31:15.964646 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd8768-dfd3-4bb1-b7c9-f4d803bf829f" containerName="ovn-config" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964654 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd8768-dfd3-4bb1-b7c9-f4d803bf829f" containerName="ovn-config" Feb 26 11:31:15 crc kubenswrapper[4699]: E0226 11:31:15.964667 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9e36d9-5d53-46d8-a91a-22dc9338ab58" containerName="mariadb-database-create" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964674 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9e36d9-5d53-46d8-a91a-22dc9338ab58" containerName="mariadb-database-create" Feb 26 11:31:15 crc kubenswrapper[4699]: E0226 11:31:15.964688 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a68fa18-1c49-4d3d-bc5f-75763944d818" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964698 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a68fa18-1c49-4d3d-bc5f-75763944d818" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: E0226 11:31:15.964716 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a166832-199a-436c-85a2-4ccde527f180" containerName="dnsmasq-dns" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964723 4699 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2a166832-199a-436c-85a2-4ccde527f180" containerName="dnsmasq-dns" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964933 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="1029eddb-2336-4ec5-af4a-b8fed82d3d55" containerName="mariadb-database-create" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964949 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="758bbe1c-d826-47f7-aff6-54e9fc4ebe63" containerName="mariadb-database-create" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964961 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9e36d9-5d53-46d8-a91a-22dc9338ab58" containerName="mariadb-database-create" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964981 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c1d656-4f85-483b-b7a2-6132b71ae093" containerName="glance-db-sync" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.964990 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a68fa18-1c49-4d3d-bc5f-75763944d818" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.965001 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c102f5c-cbaf-429e-b487-8b179f989720" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.965009 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c25243e-b6d9-40f5-9c3b-31947cf74cc9" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.965019 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c910eba-ce23-4fd9-b08a-54b96fe6a2da" containerName="mariadb-account-create-update" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.965028 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f040612-306e-4ce2-b289-ed5be7bbc9e3" containerName="keystone-db-sync" Feb 26 11:31:15 
crc kubenswrapper[4699]: I0226 11:31:15.965037 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a166832-199a-436c-85a2-4ccde527f180" containerName="dnsmasq-dns" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.965050 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fd8768-dfd3-4bb1-b7c9-f4d803bf829f" containerName="ovn-config" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.966215 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:15 crc kubenswrapper[4699]: I0226 11:31:15.996205 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-4bmfl"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.009460 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rx6w7"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.010705 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.039250 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.039380 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbntt" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.039451 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.039615 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.039688 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.098693 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-bootstrap-rx6w7"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152371 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w8wd\" (UniqueName: \"kubernetes.io/projected/833927c0-710f-446e-a3be-0df2b2399638-kube-api-access-9w8wd\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152455 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152495 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152526 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-scripts\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152560 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-config-data\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " 
pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152600 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-combined-ca-bundle\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152701 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152758 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152787 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-config\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152860 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgf56\" (UniqueName: \"kubernetes.io/projected/7fc97c22-3dce-4a90-bd78-d976a368e56c-kube-api-access-dgf56\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: 
\"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152895 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-credential-keys\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.152932 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-fernet-keys\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.227052 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-db87b77d9-ns48f"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.228885 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.234623 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.234914 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.235165 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.236825 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-84wm7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.257766 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w8wd\" (UniqueName: \"kubernetes.io/projected/833927c0-710f-446e-a3be-0df2b2399638-kube-api-access-9w8wd\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.257813 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.257838 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.257853 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-scripts\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.257867 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-config-data\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.257886 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-combined-ca-bundle\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.257944 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.257972 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.257989 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-config\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.258030 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgf56\" (UniqueName: \"kubernetes.io/projected/7fc97c22-3dce-4a90-bd78-d976a368e56c-kube-api-access-dgf56\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.258050 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-credential-keys\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.258069 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-fernet-keys\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.265964 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-fernet-keys\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.266638 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.267194 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.267680 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-config\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.268266 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.257765 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-db87b77d9-ns48f"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.274572 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-scripts\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.275715 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.281527 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-credential-keys\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.285847 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-combined-ca-bundle\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.286391 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-config-data\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.292643 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-f49xd"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.300145 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w8wd\" (UniqueName: \"kubernetes.io/projected/833927c0-710f-446e-a3be-0df2b2399638-kube-api-access-9w8wd\") pod \"keystone-bootstrap-rx6w7\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.301877 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.312174 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgf56\" (UniqueName: \"kubernetes.io/projected/7fc97c22-3dce-4a90-bd78-d976a368e56c-kube-api-access-dgf56\") pod \"dnsmasq-dns-6f8c45789f-4bmfl\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.326020 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.326239 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.326262 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bgvh2" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.339419 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-f49xd"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.359006 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k4xk\" (UniqueName: \"kubernetes.io/projected/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-kube-api-access-8k4xk\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.359054 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-logs\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.359083 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-config-data\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.359156 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-horizon-secret-key\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.359218 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-scripts\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.367388 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.369320 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.376795 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-4bmfl"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.377853 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.384294 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.384529 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.386724 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.411633 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.436437 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-dr78q"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.437598 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.445391 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.445632 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.445782 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xrfkn" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460450 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-scripts\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460486 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-horizon-secret-key\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460516 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8426fd89-9eba-46fa-8611-e98cc7636b41-etc-machine-id\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460550 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-scripts\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460576 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-combined-ca-bundle\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460604 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-scripts\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460634 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-log-httpd\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460659 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-config-data\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460696 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460714 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-config-data\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460740 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k4xk\" (UniqueName: \"kubernetes.io/projected/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-kube-api-access-8k4xk\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460761 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-logs\") pod 
\"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460786 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-config-data\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460816 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srl4m\" (UniqueName: \"kubernetes.io/projected/7cec2d73-9ca8-4a8b-836d-efce961fbde8-kube-api-access-srl4m\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460850 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-run-httpd\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460877 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-db-sync-config-data\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460907 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " 
pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.460926 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr9sd\" (UniqueName: \"kubernetes.io/projected/8426fd89-9eba-46fa-8611-e98cc7636b41-kube-api-access-mr9sd\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.464990 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-scripts\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.465141 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-logs\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.466096 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-config-data\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.476104 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-horizon-secret-key\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.503863 4699 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-6c9c9f998c-xbnb6"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.505322 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.507755 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k4xk\" (UniqueName: \"kubernetes.io/projected/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-kube-api-access-8k4xk\") pod \"horizon-db87b77d9-ns48f\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.533184 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7g59c"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.534274 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.553540 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zs6cf" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.553657 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f6f7dcd75-m9jm6"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.553725 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.557452 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.562897 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srl4m\" (UniqueName: \"kubernetes.io/projected/7cec2d73-9ca8-4a8b-836d-efce961fbde8-kube-api-access-srl4m\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.562954 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-config\") pod \"neutron-db-sync-dr78q\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.562974 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-run-httpd\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.562995 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-db-sync-config-data\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563018 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9sd\" (UniqueName: \"kubernetes.io/projected/8426fd89-9eba-46fa-8611-e98cc7636b41-kube-api-access-mr9sd\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563035 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563060 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-scripts\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563081 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8426fd89-9eba-46fa-8611-e98cc7636b41-etc-machine-id\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563103 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-combined-ca-bundle\") pod \"neutron-db-sync-dr78q\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563149 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-scripts\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563172 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-combined-ca-bundle\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563203 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-log-httpd\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563229 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-config-data\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563258 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqk94\" (UniqueName: \"kubernetes.io/projected/ae813248-510e-4b19-bcd8-39cefca6cd37-kube-api-access-pqk94\") pod \"neutron-db-sync-dr78q\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563278 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.563296 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-config-data\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " 
pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.569321 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8426fd89-9eba-46fa-8611-e98cc7636b41-etc-machine-id\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.569802 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-log-httpd\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.570063 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-run-httpd\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.570512 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dr78q"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.576189 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.578343 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7g59c"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.587969 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.588175 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-db-sync-config-data\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.588516 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-config-data\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.588909 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f6f7dcd75-m9jm6"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.590018 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-config-data\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.590563 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-scripts\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.594540 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.600502 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-scripts\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.607586 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-combined-ca-bundle\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.607616 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srl4m\" (UniqueName: \"kubernetes.io/projected/7cec2d73-9ca8-4a8b-836d-efce961fbde8-kube-api-access-srl4m\") pod \"ceilometer-0\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.620474 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9sd\" (UniqueName: \"kubernetes.io/projected/8426fd89-9eba-46fa-8611-e98cc7636b41-kube-api-access-mr9sd\") pod \"cinder-db-sync-f49xd\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " pod="openstack/cinder-db-sync-f49xd" Feb 26 
11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.621853 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-xbnb6"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.638580 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-z6w9z"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.639848 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.649650 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2ghn5" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.649839 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.652708 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673382 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-db-sync-config-data\") pod \"barbican-db-sync-7g59c\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673433 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-config\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673462 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673485 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673515 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqk94\" (UniqueName: \"kubernetes.io/projected/ae813248-510e-4b19-bcd8-39cefca6cd37-kube-api-access-pqk94\") pod \"neutron-db-sync-dr78q\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673533 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-scripts\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673557 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n82h\" (UniqueName: \"kubernetes.io/projected/41ed545b-f613-4408-bd1c-df5a09432e39-kube-api-access-7n82h\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673578 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673598 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673624 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvm74\" (UniqueName: \"kubernetes.io/projected/e1315502-3c1c-4d70-b105-d31a6e2fe754-kube-api-access-tvm74\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673647 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41ed545b-f613-4408-bd1c-df5a09432e39-horizon-secret-key\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673664 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-config\") pod \"neutron-db-sync-dr78q\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673684 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-config-data\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673713 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-combined-ca-bundle\") pod \"barbican-db-sync-7g59c\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673748 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-combined-ca-bundle\") pod \"neutron-db-sync-dr78q\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673773 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ed545b-f613-4408-bd1c-df5a09432e39-logs\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.673796 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmccd\" (UniqueName: \"kubernetes.io/projected/d45d20cb-c561-4b84-b327-9b096865e8bb-kube-api-access-xmccd\") pod \"barbican-db-sync-7g59c\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.680021 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-config\") pod \"neutron-db-sync-dr78q\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.687847 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-combined-ca-bundle\") pod \"neutron-db-sync-dr78q\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.706200 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-z6w9z"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.707508 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-f49xd" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.714751 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqk94\" (UniqueName: \"kubernetes.io/projected/ae813248-510e-4b19-bcd8-39cefca6cd37-kube-api-access-pqk94\") pod \"neutron-db-sync-dr78q\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.715540 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.733930 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-xbnb6"] Feb 26 11:31:16 crc kubenswrapper[4699]: E0226 11:31:16.734782 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-tvm74 ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" podUID="e1315502-3c1c-4d70-b105-d31a6e2fe754" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.755378 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.778472 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-db-sync-config-data\") pod \"barbican-db-sync-7g59c\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.778514 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-scripts\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.778547 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-config\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.778579 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.778599 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.778622 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-combined-ca-bundle\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.778658 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-scripts\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.778692 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n82h\" (UniqueName: \"kubernetes.io/projected/41ed545b-f613-4408-bd1c-df5a09432e39-kube-api-access-7n82h\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.778717 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.778739 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.778769 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvm74\" (UniqueName: \"kubernetes.io/projected/e1315502-3c1c-4d70-b105-d31a6e2fe754-kube-api-access-tvm74\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.780185 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41ed545b-f613-4408-bd1c-df5a09432e39-horizon-secret-key\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.780221 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-config-data\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.780318 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-99w54\" (UniqueName: \"kubernetes.io/projected/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-kube-api-access-99w54\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.780510 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-combined-ca-bundle\") pod \"barbican-db-sync-7g59c\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.780561 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-config-data\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.780613 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ed545b-f613-4408-bd1c-df5a09432e39-logs\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.780632 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-logs\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.780667 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmccd\" (UniqueName: 
\"kubernetes.io/projected/d45d20cb-c561-4b84-b327-9b096865e8bb-kube-api-access-xmccd\") pod \"barbican-db-sync-7g59c\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.781776 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.784545 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-scripts\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.787350 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.789586 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.789854 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: 
\"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.789966 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-db-sync-config-data\") pod \"barbican-db-sync-7g59c\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.790145 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ed545b-f613-4408-bd1c-df5a09432e39-logs\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.790976 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-config-data\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.792642 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-config\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.798488 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.800404 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.803391 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.803670 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j4q6c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.806979 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.810747 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41ed545b-f613-4408-bd1c-df5a09432e39-horizon-secret-key\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.812387 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.819786 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmccd\" (UniqueName: \"kubernetes.io/projected/d45d20cb-c561-4b84-b327-9b096865e8bb-kube-api-access-xmccd\") pod \"barbican-db-sync-7g59c\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.820286 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n82h\" (UniqueName: \"kubernetes.io/projected/41ed545b-f613-4408-bd1c-df5a09432e39-kube-api-access-7n82h\") pod \"horizon-7f6f7dcd75-m9jm6\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.840412 4699 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-combined-ca-bundle\") pod \"barbican-db-sync-7g59c\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.861109 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvm74\" (UniqueName: \"kubernetes.io/projected/e1315502-3c1c-4d70-b105-d31a6e2fe754-kube-api-access-tvm74\") pod \"dnsmasq-dns-6c9c9f998c-xbnb6\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.876335 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-nhzhh"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.877774 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.882237 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztqww\" (UniqueName: \"kubernetes.io/projected/630dc7fb-8bb5-4136-accd-eb460ad0e940-kube-api-access-ztqww\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.882457 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-logs\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.882609 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-scripts\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.882705 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-logs\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.882845 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-config-data\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.882958 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-combined-ca-bundle\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.883304 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-scripts\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.885598 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-logs\") pod 
\"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.888792 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-nhzhh"] Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.889980 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99w54\" (UniqueName: \"kubernetes.io/projected/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-kube-api-access-99w54\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.891591 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.892593 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-config-data\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.891616 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-combined-ca-bundle\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.890187 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-scripts\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.890582 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.894319 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.899943 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.898709 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-config-data\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.916689 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:16 crc kubenswrapper[4699]: I0226 11:31:16.929141 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99w54\" (UniqueName: \"kubernetes.io/projected/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-kube-api-access-99w54\") pod \"placement-db-sync-z6w9z\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.001770 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.002219 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.003343 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.003492 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.003608 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.003915 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztqww\" (UniqueName: \"kubernetes.io/projected/630dc7fb-8bb5-4136-accd-eb460ad0e940-kube-api-access-ztqww\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.004247 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.003994 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.005057 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.005363 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rj6\" 
(UniqueName: \"kubernetes.io/projected/81843e2c-774f-402a-bd90-c4485ab24c05-kube-api-access-z2rj6\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.005517 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-config\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.005615 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-logs\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.005711 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-config-data\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.005828 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.006357 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.007256 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-scripts\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.007006 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-logs\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.010875 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.016732 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-scripts\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.021228 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.034345 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztqww\" (UniqueName: \"kubernetes.io/projected/630dc7fb-8bb5-4136-accd-eb460ad0e940-kube-api-access-ztqww\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.073706 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.110427 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.110496 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.110513 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2rj6\" (UniqueName: \"kubernetes.io/projected/81843e2c-774f-402a-bd90-c4485ab24c05-kube-api-access-z2rj6\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " 
pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.110544 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-config\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.110584 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.110599 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.111782 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.112068 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: 
I0226 11:31:17.112185 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.112720 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.114765 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-config\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.154516 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.169679 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2rj6\" (UniqueName: \"kubernetes.io/projected/81843e2c-774f-402a-bd90-c4485ab24c05-kube-api-access-z2rj6\") pod \"dnsmasq-dns-57c957c4ff-nhzhh\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: W0226 11:31:17.240184 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fc97c22_3dce_4a90_bd78_d976a368e56c.slice/crio-9f0340f743584c76493adaa36d5d0b55236ad93a692317a2e0b7aec568cbee52 WatchSource:0}: Error finding container 9f0340f743584c76493adaa36d5d0b55236ad93a692317a2e0b7aec568cbee52: Status 404 returned error can't find the container with id 9f0340f743584c76493adaa36d5d0b55236ad93a692317a2e0b7aec568cbee52 Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.260513 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rx6w7"] Feb 26 11:31:17 crc kubenswrapper[4699]: W0226 11:31:17.278947 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod833927c0_710f_446e_a3be_0df2b2399638.slice/crio-654ff57a8dfa79f3d0a638f1486da08c6ebdf1d037f74a27a5ccdbee84215eb5 WatchSource:0}: Error finding container 654ff57a8dfa79f3d0a638f1486da08c6ebdf1d037f74a27a5ccdbee84215eb5: Status 404 returned error can't find the container with id 654ff57a8dfa79f3d0a638f1486da08c6ebdf1d037f74a27a5ccdbee84215eb5 Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.279005 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-4bmfl"] Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.407240 4699 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.410456 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.415590 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.421751 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.424173 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.520763 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.525418 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.525484 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.525541 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-httpd-run\") pod \"glance-default-internal-api-0\" 
(UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.525564 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.525593 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.525619 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.525696 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xwxg\" (UniqueName: \"kubernetes.io/projected/9c24335e-75be-481e-b1c8-631913d074ee-kube-api-access-6xwxg\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.539733 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-db87b77d9-ns48f"] Feb 26 11:31:17 crc kubenswrapper[4699]: W0226 11:31:17.550261 4699 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9cf42d8_ed15_44dd_aaed_fbffa29417c4.slice/crio-1e699d538cdebfe589c475a344848f907766835460184b9e330ed614ebb6483c WatchSource:0}: Error finding container 1e699d538cdebfe589c475a344848f907766835460184b9e330ed614ebb6483c: Status 404 returned error can't find the container with id 1e699d538cdebfe589c475a344848f907766835460184b9e330ed614ebb6483c Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.631182 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.631235 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.631270 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.631294 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 
11:31:17.631367 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xwxg\" (UniqueName: \"kubernetes.io/projected/9c24335e-75be-481e-b1c8-631913d074ee-kube-api-access-6xwxg\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.631406 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.631436 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.631866 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.632110 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.638381 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.642492 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.643604 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.660189 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.687484 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xwxg\" (UniqueName: \"kubernetes.io/projected/9c24335e-75be-481e-b1c8-631913d074ee-kube-api-access-6xwxg\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.734353 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.760207 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dr78q"] Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.778652 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-f49xd"] Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.785195 4699 generic.go:334] "Generic (PLEG): container finished" podID="7fc97c22-3dce-4a90-bd78-d976a368e56c" containerID="e826d5a3469d5b15dd136a91917fe06077ab83ca45f565c094384a045ed23b99" exitCode=0 Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.785251 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" event={"ID":"7fc97c22-3dce-4a90-bd78-d976a368e56c","Type":"ContainerDied","Data":"e826d5a3469d5b15dd136a91917fe06077ab83ca45f565c094384a045ed23b99"} Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.785276 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" event={"ID":"7fc97c22-3dce-4a90-bd78-d976a368e56c","Type":"ContainerStarted","Data":"9f0340f743584c76493adaa36d5d0b55236ad93a692317a2e0b7aec568cbee52"} Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.787980 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7g59c"] Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.826344 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rx6w7" event={"ID":"833927c0-710f-446e-a3be-0df2b2399638","Type":"ContainerStarted","Data":"5822866374c533954891aab83b4e82e6518ecfafe343985ba49ddc3abdfd00dc"} Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.826398 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-rx6w7" event={"ID":"833927c0-710f-446e-a3be-0df2b2399638","Type":"ContainerStarted","Data":"654ff57a8dfa79f3d0a638f1486da08c6ebdf1d037f74a27a5ccdbee84215eb5"} Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.853666 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-db87b77d9-ns48f" event={"ID":"c9cf42d8-ed15-44dd-aaed-fbffa29417c4","Type":"ContainerStarted","Data":"1e699d538cdebfe589c475a344848f907766835460184b9e330ed614ebb6483c"} Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.874333 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.874983 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cec2d73-9ca8-4a8b-836d-efce961fbde8","Type":"ContainerStarted","Data":"50c24ca371e65d6a43a9a97ed072f4bd1eadffc6515aa3e571658b4eeec32c3b"} Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.882091 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rx6w7" podStartSLOduration=2.88206837 podStartE2EDuration="2.88206837s" podCreationTimestamp="2026-02-26 11:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:17.875263944 +0000 UTC m=+1223.686090388" watchObservedRunningTime="2026-02-26 11:31:17.88206837 +0000 UTC m=+1223.692894804" Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.918782 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:17 crc kubenswrapper[4699]: W0226 11:31:17.970607 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47a9d008_5b7e_4866_b92b_efcb60cbfdb0.slice/crio-920ac430b947cdc3b32b9b6348a1213ef17636f95a2668e9fab680798b77b616 WatchSource:0}: Error finding container 920ac430b947cdc3b32b9b6348a1213ef17636f95a2668e9fab680798b77b616: Status 404 returned error can't find the container with id 920ac430b947cdc3b32b9b6348a1213ef17636f95a2668e9fab680798b77b616 Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.979719 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f6f7dcd75-m9jm6"] Feb 26 11:31:17 crc kubenswrapper[4699]: I0226 11:31:17.987539 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-z6w9z"] Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.034849 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.052258 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-sb\") pod \"e1315502-3c1c-4d70-b105-d31a6e2fe754\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.052353 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-nb\") pod \"e1315502-3c1c-4d70-b105-d31a6e2fe754\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.052397 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-svc\") pod \"e1315502-3c1c-4d70-b105-d31a6e2fe754\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.052497 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-swift-storage-0\") pod \"e1315502-3c1c-4d70-b105-d31a6e2fe754\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.052602 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-config\") pod \"e1315502-3c1c-4d70-b105-d31a6e2fe754\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.052702 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvm74\" 
(UniqueName: \"kubernetes.io/projected/e1315502-3c1c-4d70-b105-d31a6e2fe754-kube-api-access-tvm74\") pod \"e1315502-3c1c-4d70-b105-d31a6e2fe754\" (UID: \"e1315502-3c1c-4d70-b105-d31a6e2fe754\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.053801 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1315502-3c1c-4d70-b105-d31a6e2fe754" (UID: "e1315502-3c1c-4d70-b105-d31a6e2fe754"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.053923 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1315502-3c1c-4d70-b105-d31a6e2fe754" (UID: "e1315502-3c1c-4d70-b105-d31a6e2fe754"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.054134 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1315502-3c1c-4d70-b105-d31a6e2fe754" (UID: "e1315502-3c1c-4d70-b105-d31a6e2fe754"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.054358 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-config" (OuterVolumeSpecName: "config") pod "e1315502-3c1c-4d70-b105-d31a6e2fe754" (UID: "e1315502-3c1c-4d70-b105-d31a6e2fe754"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.054519 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e1315502-3c1c-4d70-b105-d31a6e2fe754" (UID: "e1315502-3c1c-4d70-b105-d31a6e2fe754"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.058705 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1315502-3c1c-4d70-b105-d31a6e2fe754-kube-api-access-tvm74" (OuterVolumeSpecName: "kube-api-access-tvm74") pod "e1315502-3c1c-4d70-b105-d31a6e2fe754" (UID: "e1315502-3c1c-4d70-b105-d31a6e2fe754"). InnerVolumeSpecName "kube-api-access-tvm74". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.059797 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-nhzhh"] Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.161789 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.161842 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvm74\" (UniqueName: \"kubernetes.io/projected/e1315502-3c1c-4d70-b105-d31a6e2fe754-kube-api-access-tvm74\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.161855 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 
11:31:18.161865 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.161876 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.161912 4699 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1315502-3c1c-4d70-b105-d31a6e2fe754-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.314388 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.317777 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.372741 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-config\") pod \"7fc97c22-3dce-4a90-bd78-d976a368e56c\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.372789 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-sb\") pod \"7fc97c22-3dce-4a90-bd78-d976a368e56c\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.372909 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-swift-storage-0\") pod \"7fc97c22-3dce-4a90-bd78-d976a368e56c\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.373003 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgf56\" (UniqueName: \"kubernetes.io/projected/7fc97c22-3dce-4a90-bd78-d976a368e56c-kube-api-access-dgf56\") pod \"7fc97c22-3dce-4a90-bd78-d976a368e56c\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.373020 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-svc\") pod \"7fc97c22-3dce-4a90-bd78-d976a368e56c\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.373082 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-nb\") pod \"7fc97c22-3dce-4a90-bd78-d976a368e56c\" (UID: \"7fc97c22-3dce-4a90-bd78-d976a368e56c\") " Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.393741 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc97c22-3dce-4a90-bd78-d976a368e56c-kube-api-access-dgf56" (OuterVolumeSpecName: "kube-api-access-dgf56") pod "7fc97c22-3dce-4a90-bd78-d976a368e56c" (UID: "7fc97c22-3dce-4a90-bd78-d976a368e56c"). InnerVolumeSpecName "kube-api-access-dgf56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.426396 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7fc97c22-3dce-4a90-bd78-d976a368e56c" (UID: "7fc97c22-3dce-4a90-bd78-d976a368e56c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.431405 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-config" (OuterVolumeSpecName: "config") pod "7fc97c22-3dce-4a90-bd78-d976a368e56c" (UID: "7fc97c22-3dce-4a90-bd78-d976a368e56c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.476047 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgf56\" (UniqueName: \"kubernetes.io/projected/7fc97c22-3dce-4a90-bd78-d976a368e56c-kube-api-access-dgf56\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.476316 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.476333 4699 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.492143 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.516539 4699 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/horizon-7f6f7dcd75-m9jm6"] Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.579512 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d649b895f-2cm8f"] Feb 26 11:31:18 crc kubenswrapper[4699]: E0226 11:31:18.581928 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc97c22-3dce-4a90-bd78-d976a368e56c" containerName="init" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.581961 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc97c22-3dce-4a90-bd78-d976a368e56c" containerName="init" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.582210 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc97c22-3dce-4a90-bd78-d976a368e56c" containerName="init" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.603204 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7fc97c22-3dce-4a90-bd78-d976a368e56c" (UID: "7fc97c22-3dce-4a90-bd78-d976a368e56c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.607426 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.615254 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7fc97c22-3dce-4a90-bd78-d976a368e56c" (UID: "7fc97c22-3dce-4a90-bd78-d976a368e56c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.615724 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7fc97c22-3dce-4a90-bd78-d976a368e56c" (UID: "7fc97c22-3dce-4a90-bd78-d976a368e56c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.643943 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d649b895f-2cm8f"] Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.682266 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.683639 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6628395-d6a6-4719-b0ad-10984c3c172b-horizon-secret-key\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.683706 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6628395-d6a6-4719-b0ad-10984c3c172b-logs\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.683774 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-config-data\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 
crc kubenswrapper[4699]: I0226 11:31:18.684828 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfkrb\" (UniqueName: \"kubernetes.io/projected/d6628395-d6a6-4719-b0ad-10984c3c172b-kube-api-access-tfkrb\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.684853 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-scripts\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.684981 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.685002 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.685011 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fc97c22-3dce-4a90-bd78-d976a368e56c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.714668 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.778435 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:18 crc kubenswrapper[4699]: W0226 11:31:18.780357 4699 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c24335e_75be_481e_b1c8_631913d074ee.slice/crio-3505d7f98aca89b6416ea19c75cdca4f118df015fab7cba4bb6ff9fd01c39fa6 WatchSource:0}: Error finding container 3505d7f98aca89b6416ea19c75cdca4f118df015fab7cba4bb6ff9fd01c39fa6: Status 404 returned error can't find the container with id 3505d7f98aca89b6416ea19c75cdca4f118df015fab7cba4bb6ff9fd01c39fa6 Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.788141 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-config-data\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.788191 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfkrb\" (UniqueName: \"kubernetes.io/projected/d6628395-d6a6-4719-b0ad-10984c3c172b-kube-api-access-tfkrb\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.788215 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-scripts\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.788809 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6628395-d6a6-4719-b0ad-10984c3c172b-horizon-secret-key\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: 
I0226 11:31:18.788879 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6628395-d6a6-4719-b0ad-10984c3c172b-logs\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.789224 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6628395-d6a6-4719-b0ad-10984c3c172b-logs\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.789419 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-scripts\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.791512 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-config-data\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.805313 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6628395-d6a6-4719-b0ad-10984c3c172b-horizon-secret-key\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.819252 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfkrb\" (UniqueName: 
\"kubernetes.io/projected/d6628395-d6a6-4719-b0ad-10984c3c172b-kube-api-access-tfkrb\") pod \"horizon-6d649b895f-2cm8f\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.908414 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z6w9z" event={"ID":"47a9d008-5b7e-4866-b92b-efcb60cbfdb0","Type":"ContainerStarted","Data":"920ac430b947cdc3b32b9b6348a1213ef17636f95a2668e9fab680798b77b616"} Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.910908 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c24335e-75be-481e-b1c8-631913d074ee","Type":"ContainerStarted","Data":"3505d7f98aca89b6416ea19c75cdca4f118df015fab7cba4bb6ff9fd01c39fa6"} Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.913322 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" event={"ID":"7fc97c22-3dce-4a90-bd78-d976a368e56c","Type":"ContainerDied","Data":"9f0340f743584c76493adaa36d5d0b55236ad93a692317a2e0b7aec568cbee52"} Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.913379 4699 scope.go:117] "RemoveContainer" containerID="e826d5a3469d5b15dd136a91917fe06077ab83ca45f565c094384a045ed23b99" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.913507 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-4bmfl" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.927171 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f6f7dcd75-m9jm6" event={"ID":"41ed545b-f613-4408-bd1c-df5a09432e39","Type":"ContainerStarted","Data":"b1308b571f0b2b92fb651e80c640ed1db7c81e3d85041ae47619d6dae7c87aad"} Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.931428 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7g59c" event={"ID":"d45d20cb-c561-4b84-b327-9b096865e8bb","Type":"ContainerStarted","Data":"a8bf2edfe1a0cab1df993c5f3eabf3a6892b72d4d33db983d7476af16ba0c19b"} Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.934709 4699 generic.go:334] "Generic (PLEG): container finished" podID="81843e2c-774f-402a-bd90-c4485ab24c05" containerID="2161a9d96d5b3712e81eaf624a88f2f6f3ee6fc2f0aaa102d1a1b03d768333c4" exitCode=0 Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.934786 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" event={"ID":"81843e2c-774f-402a-bd90-c4485ab24c05","Type":"ContainerDied","Data":"2161a9d96d5b3712e81eaf624a88f2f6f3ee6fc2f0aaa102d1a1b03d768333c4"} Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.934808 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" event={"ID":"81843e2c-774f-402a-bd90-c4485ab24c05","Type":"ContainerStarted","Data":"fed26d1422b55affaace34ac700e5a58aa1d192cab8a88f61c67c7cb3b1ca3ed"} Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.938759 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f49xd" event={"ID":"8426fd89-9eba-46fa-8611-e98cc7636b41","Type":"ContainerStarted","Data":"3e0a4f4a5840bf076a02406c3b220ed5f7a7941a35ea7875a55be88dc0efa11e"} Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.942619 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.943556 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dr78q" event={"ID":"ae813248-510e-4b19-bcd8-39cefca6cd37","Type":"ContainerStarted","Data":"0eab0de6a835999edb566f7a018ef04e992296918bfb17f761cbea8ef8c3775a"} Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.943598 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dr78q" event={"ID":"ae813248-510e-4b19-bcd8-39cefca6cd37","Type":"ContainerStarted","Data":"41ed63c8f69999d16d3b8a0632b0099f90cd743a1b305a70c63928dff741248e"} Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.945874 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-xbnb6" Feb 26 11:31:18 crc kubenswrapper[4699]: I0226 11:31:18.946931 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"630dc7fb-8bb5-4136-accd-eb460ad0e940","Type":"ContainerStarted","Data":"0462d99185a120468341d7f6efeca5ca1d1c779c506ddb0fb105a2de0f655ad5"} Feb 26 11:31:19 crc kubenswrapper[4699]: I0226 11:31:19.009324 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-dr78q" podStartSLOduration=3.009308383 podStartE2EDuration="3.009308383s" podCreationTimestamp="2026-02-26 11:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:18.998276135 +0000 UTC m=+1224.809102569" watchObservedRunningTime="2026-02-26 11:31:19.009308383 +0000 UTC m=+1224.820134817" Feb 26 11:31:19 crc kubenswrapper[4699]: I0226 11:31:19.207306 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-4bmfl"] Feb 26 11:31:19 crc kubenswrapper[4699]: I0226 11:31:19.218334 4699 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-4bmfl"] Feb 26 11:31:19 crc kubenswrapper[4699]: I0226 11:31:19.266706 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-xbnb6"] Feb 26 11:31:19 crc kubenswrapper[4699]: I0226 11:31:19.289805 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-xbnb6"] Feb 26 11:31:19 crc kubenswrapper[4699]: I0226 11:31:19.469891 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d649b895f-2cm8f"] Feb 26 11:31:19 crc kubenswrapper[4699]: W0226 11:31:19.495449 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6628395_d6a6_4719_b0ad_10984c3c172b.slice/crio-cf033b29d548f1e02ed1b1bad110c9d77ffdf16f842c34bb4ebc18230fbed6bf WatchSource:0}: Error finding container cf033b29d548f1e02ed1b1bad110c9d77ffdf16f842c34bb4ebc18230fbed6bf: Status 404 returned error can't find the container with id cf033b29d548f1e02ed1b1bad110c9d77ffdf16f842c34bb4ebc18230fbed6bf Feb 26 11:31:19 crc kubenswrapper[4699]: I0226 11:31:19.958334 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"630dc7fb-8bb5-4136-accd-eb460ad0e940","Type":"ContainerStarted","Data":"790e1bee9a89611157009e82024f95d4afa15834cb397b52f1c9c892d7cb8150"} Feb 26 11:31:19 crc kubenswrapper[4699]: I0226 11:31:19.960766 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" event={"ID":"81843e2c-774f-402a-bd90-c4485ab24c05","Type":"ContainerStarted","Data":"3eda8514ede18fd03dc0849cf95cf8d9b4cb3f130429078ff465a976e2f5421b"} Feb 26 11:31:19 crc kubenswrapper[4699]: I0226 11:31:19.961088 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:19 crc kubenswrapper[4699]: I0226 11:31:19.965964 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d649b895f-2cm8f" event={"ID":"d6628395-d6a6-4719-b0ad-10984c3c172b","Type":"ContainerStarted","Data":"cf033b29d548f1e02ed1b1bad110c9d77ffdf16f842c34bb4ebc18230fbed6bf"} Feb 26 11:31:19 crc kubenswrapper[4699]: I0226 11:31:19.984578 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" podStartSLOduration=3.984522285 podStartE2EDuration="3.984522285s" podCreationTimestamp="2026-02-26 11:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:19.98120908 +0000 UTC m=+1225.792035524" watchObservedRunningTime="2026-02-26 11:31:19.984522285 +0000 UTC m=+1225.795348719" Feb 26 11:31:20 crc kubenswrapper[4699]: I0226 11:31:20.284180 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc97c22-3dce-4a90-bd78-d976a368e56c" path="/var/lib/kubelet/pods/7fc97c22-3dce-4a90-bd78-d976a368e56c/volumes" Feb 26 11:31:20 crc kubenswrapper[4699]: I0226 11:31:20.284686 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1315502-3c1c-4d70-b105-d31a6e2fe754" path="/var/lib/kubelet/pods/e1315502-3c1c-4d70-b105-d31a6e2fe754/volumes" Feb 26 11:31:20 crc kubenswrapper[4699]: I0226 11:31:20.991533 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c24335e-75be-481e-b1c8-631913d074ee","Type":"ContainerStarted","Data":"fcc40c7508a6a00f53ef699bf82940d37acb3bc8e8309bb9b5ea1335e70a77f3"} Feb 26 11:31:22 crc kubenswrapper[4699]: I0226 11:31:22.023755 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"630dc7fb-8bb5-4136-accd-eb460ad0e940","Type":"ContainerStarted","Data":"1784112af5cd06d4ca4320e949f04a24c30b53553bdf95678319674202498461"} Feb 26 11:31:22 crc kubenswrapper[4699]: I0226 
11:31:22.023916 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="630dc7fb-8bb5-4136-accd-eb460ad0e940" containerName="glance-log" containerID="cri-o://790e1bee9a89611157009e82024f95d4afa15834cb397b52f1c9c892d7cb8150" gracePeriod=30 Feb 26 11:31:22 crc kubenswrapper[4699]: I0226 11:31:22.024208 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="630dc7fb-8bb5-4136-accd-eb460ad0e940" containerName="glance-httpd" containerID="cri-o://1784112af5cd06d4ca4320e949f04a24c30b53553bdf95678319674202498461" gracePeriod=30 Feb 26 11:31:22 crc kubenswrapper[4699]: I0226 11:31:22.035221 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c24335e-75be-481e-b1c8-631913d074ee","Type":"ContainerStarted","Data":"bd25a0bbaec5054f7454612c57bed5e09bb1f26fa1edc54363bf9ae7af5130e5"} Feb 26 11:31:22 crc kubenswrapper[4699]: I0226 11:31:22.035424 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9c24335e-75be-481e-b1c8-631913d074ee" containerName="glance-log" containerID="cri-o://fcc40c7508a6a00f53ef699bf82940d37acb3bc8e8309bb9b5ea1335e70a77f3" gracePeriod=30 Feb 26 11:31:22 crc kubenswrapper[4699]: I0226 11:31:22.035471 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9c24335e-75be-481e-b1c8-631913d074ee" containerName="glance-httpd" containerID="cri-o://bd25a0bbaec5054f7454612c57bed5e09bb1f26fa1edc54363bf9ae7af5130e5" gracePeriod=30 Feb 26 11:31:22 crc kubenswrapper[4699]: I0226 11:31:22.079266 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.079246538 podStartE2EDuration="6.079246538s" podCreationTimestamp="2026-02-26 11:31:16 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:22.077537319 +0000 UTC m=+1227.888363753" watchObservedRunningTime="2026-02-26 11:31:22.079246538 +0000 UTC m=+1227.890072962" Feb 26 11:31:22 crc kubenswrapper[4699]: I0226 11:31:22.079574 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.079566508 podStartE2EDuration="6.079566508s" podCreationTimestamp="2026-02-26 11:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:22.0512526 +0000 UTC m=+1227.862079044" watchObservedRunningTime="2026-02-26 11:31:22.079566508 +0000 UTC m=+1227.890392942" Feb 26 11:31:23 crc kubenswrapper[4699]: I0226 11:31:23.047378 4699 generic.go:334] "Generic (PLEG): container finished" podID="630dc7fb-8bb5-4136-accd-eb460ad0e940" containerID="1784112af5cd06d4ca4320e949f04a24c30b53553bdf95678319674202498461" exitCode=143 Feb 26 11:31:23 crc kubenswrapper[4699]: I0226 11:31:23.047721 4699 generic.go:334] "Generic (PLEG): container finished" podID="630dc7fb-8bb5-4136-accd-eb460ad0e940" containerID="790e1bee9a89611157009e82024f95d4afa15834cb397b52f1c9c892d7cb8150" exitCode=143 Feb 26 11:31:23 crc kubenswrapper[4699]: I0226 11:31:23.047506 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"630dc7fb-8bb5-4136-accd-eb460ad0e940","Type":"ContainerDied","Data":"1784112af5cd06d4ca4320e949f04a24c30b53553bdf95678319674202498461"} Feb 26 11:31:23 crc kubenswrapper[4699]: I0226 11:31:23.047805 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"630dc7fb-8bb5-4136-accd-eb460ad0e940","Type":"ContainerDied","Data":"790e1bee9a89611157009e82024f95d4afa15834cb397b52f1c9c892d7cb8150"} Feb 26 
11:31:23 crc kubenswrapper[4699]: I0226 11:31:23.052242 4699 generic.go:334] "Generic (PLEG): container finished" podID="9c24335e-75be-481e-b1c8-631913d074ee" containerID="bd25a0bbaec5054f7454612c57bed5e09bb1f26fa1edc54363bf9ae7af5130e5" exitCode=143 Feb 26 11:31:23 crc kubenswrapper[4699]: I0226 11:31:23.052279 4699 generic.go:334] "Generic (PLEG): container finished" podID="9c24335e-75be-481e-b1c8-631913d074ee" containerID="fcc40c7508a6a00f53ef699bf82940d37acb3bc8e8309bb9b5ea1335e70a77f3" exitCode=143 Feb 26 11:31:23 crc kubenswrapper[4699]: I0226 11:31:23.052323 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c24335e-75be-481e-b1c8-631913d074ee","Type":"ContainerDied","Data":"bd25a0bbaec5054f7454612c57bed5e09bb1f26fa1edc54363bf9ae7af5130e5"} Feb 26 11:31:23 crc kubenswrapper[4699]: I0226 11:31:23.052353 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c24335e-75be-481e-b1c8-631913d074ee","Type":"ContainerDied","Data":"fcc40c7508a6a00f53ef699bf82940d37acb3bc8e8309bb9b5ea1335e70a77f3"} Feb 26 11:31:24 crc kubenswrapper[4699]: I0226 11:31:24.072382 4699 generic.go:334] "Generic (PLEG): container finished" podID="833927c0-710f-446e-a3be-0df2b2399638" containerID="5822866374c533954891aab83b4e82e6518ecfafe343985ba49ddc3abdfd00dc" exitCode=0 Feb 26 11:31:24 crc kubenswrapper[4699]: I0226 11:31:24.072430 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rx6w7" event={"ID":"833927c0-710f-446e-a3be-0df2b2399638","Type":"ContainerDied","Data":"5822866374c533954891aab83b4e82e6518ecfafe343985ba49ddc3abdfd00dc"} Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.667076 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.678592 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.689602 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746617 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-logs\") pod \"630dc7fb-8bb5-4136-accd-eb460ad0e940\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746666 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-scripts\") pod \"9c24335e-75be-481e-b1c8-631913d074ee\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746684 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-fernet-keys\") pod \"833927c0-710f-446e-a3be-0df2b2399638\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746724 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-httpd-run\") pod \"9c24335e-75be-481e-b1c8-631913d074ee\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746793 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-credential-keys\") pod \"833927c0-710f-446e-a3be-0df2b2399638\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746812 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"9c24335e-75be-481e-b1c8-631913d074ee\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746833 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-logs\") pod \"9c24335e-75be-481e-b1c8-631913d074ee\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746848 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-combined-ca-bundle\") pod \"9c24335e-75be-481e-b1c8-631913d074ee\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746866 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-scripts\") pod \"630dc7fb-8bb5-4136-accd-eb460ad0e940\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746882 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xwxg\" (UniqueName: \"kubernetes.io/projected/9c24335e-75be-481e-b1c8-631913d074ee-kube-api-access-6xwxg\") pod \"9c24335e-75be-481e-b1c8-631913d074ee\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746900 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-config-data\") pod \"833927c0-710f-446e-a3be-0df2b2399638\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746921 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-combined-ca-bundle\") pod \"833927c0-710f-446e-a3be-0df2b2399638\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746936 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"630dc7fb-8bb5-4136-accd-eb460ad0e940\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.746953 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w8wd\" (UniqueName: \"kubernetes.io/projected/833927c0-710f-446e-a3be-0df2b2399638-kube-api-access-9w8wd\") pod \"833927c0-710f-446e-a3be-0df2b2399638\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.747032 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-config-data\") pod \"9c24335e-75be-481e-b1c8-631913d074ee\" (UID: \"9c24335e-75be-481e-b1c8-631913d074ee\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.747076 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztqww\" (UniqueName: \"kubernetes.io/projected/630dc7fb-8bb5-4136-accd-eb460ad0e940-kube-api-access-ztqww\") pod \"630dc7fb-8bb5-4136-accd-eb460ad0e940\" (UID: 
\"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.747104 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-scripts\") pod \"833927c0-710f-446e-a3be-0df2b2399638\" (UID: \"833927c0-710f-446e-a3be-0df2b2399638\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.747139 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-config-data\") pod \"630dc7fb-8bb5-4136-accd-eb460ad0e940\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.747162 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-httpd-run\") pod \"630dc7fb-8bb5-4136-accd-eb460ad0e940\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.747195 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-combined-ca-bundle\") pod \"630dc7fb-8bb5-4136-accd-eb460ad0e940\" (UID: \"630dc7fb-8bb5-4136-accd-eb460ad0e940\") " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.749924 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-logs" (OuterVolumeSpecName: "logs") pod "9c24335e-75be-481e-b1c8-631913d074ee" (UID: "9c24335e-75be-481e-b1c8-631913d074ee"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.751348 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-logs" (OuterVolumeSpecName: "logs") pod "630dc7fb-8bb5-4136-accd-eb460ad0e940" (UID: "630dc7fb-8bb5-4136-accd-eb460ad0e940"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.754561 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c24335e-75be-481e-b1c8-631913d074ee-kube-api-access-6xwxg" (OuterVolumeSpecName: "kube-api-access-6xwxg") pod "9c24335e-75be-481e-b1c8-631913d074ee" (UID: "9c24335e-75be-481e-b1c8-631913d074ee"). InnerVolumeSpecName "kube-api-access-6xwxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.756279 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-scripts" (OuterVolumeSpecName: "scripts") pod "833927c0-710f-446e-a3be-0df2b2399638" (UID: "833927c0-710f-446e-a3be-0df2b2399638"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.757020 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-scripts" (OuterVolumeSpecName: "scripts") pod "9c24335e-75be-481e-b1c8-631913d074ee" (UID: "9c24335e-75be-481e-b1c8-631913d074ee"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.758908 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "833927c0-710f-446e-a3be-0df2b2399638" (UID: "833927c0-710f-446e-a3be-0df2b2399638"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.759437 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "9c24335e-75be-481e-b1c8-631913d074ee" (UID: "9c24335e-75be-481e-b1c8-631913d074ee"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.759825 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "833927c0-710f-446e-a3be-0df2b2399638" (UID: "833927c0-710f-446e-a3be-0df2b2399638"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.760317 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "630dc7fb-8bb5-4136-accd-eb460ad0e940" (UID: "630dc7fb-8bb5-4136-accd-eb460ad0e940"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.760637 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9c24335e-75be-481e-b1c8-631913d074ee" (UID: "9c24335e-75be-481e-b1c8-631913d074ee"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.760671 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "630dc7fb-8bb5-4136-accd-eb460ad0e940" (UID: "630dc7fb-8bb5-4136-accd-eb460ad0e940"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.761706 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-scripts" (OuterVolumeSpecName: "scripts") pod "630dc7fb-8bb5-4136-accd-eb460ad0e940" (UID: "630dc7fb-8bb5-4136-accd-eb460ad0e940"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.761986 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/630dc7fb-8bb5-4136-accd-eb460ad0e940-kube-api-access-ztqww" (OuterVolumeSpecName: "kube-api-access-ztqww") pod "630dc7fb-8bb5-4136-accd-eb460ad0e940" (UID: "630dc7fb-8bb5-4136-accd-eb460ad0e940"). InnerVolumeSpecName "kube-api-access-ztqww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.772778 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833927c0-710f-446e-a3be-0df2b2399638-kube-api-access-9w8wd" (OuterVolumeSpecName: "kube-api-access-9w8wd") pod "833927c0-710f-446e-a3be-0df2b2399638" (UID: "833927c0-710f-446e-a3be-0df2b2399638"). InnerVolumeSpecName "kube-api-access-9w8wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.800150 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "833927c0-710f-446e-a3be-0df2b2399638" (UID: "833927c0-710f-446e-a3be-0df2b2399638"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.809247 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "630dc7fb-8bb5-4136-accd-eb460ad0e940" (UID: "630dc7fb-8bb5-4136-accd-eb460ad0e940"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.814922 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-config-data" (OuterVolumeSpecName: "config-data") pod "833927c0-710f-446e-a3be-0df2b2399638" (UID: "833927c0-710f-446e-a3be-0df2b2399638"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.822459 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c24335e-75be-481e-b1c8-631913d074ee" (UID: "9c24335e-75be-481e-b1c8-631913d074ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.823490 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-config-data" (OuterVolumeSpecName: "config-data") pod "9c24335e-75be-481e-b1c8-631913d074ee" (UID: "9c24335e-75be-481e-b1c8-631913d074ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.841104 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-config-data" (OuterVolumeSpecName: "config-data") pod "630dc7fb-8bb5-4136-accd-eb460ad0e940" (UID: "630dc7fb-8bb5-4136-accd-eb460ad0e940"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.852244 4699 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.852480 4699 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.852541 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.852633 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.852694 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.852756 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xwxg\" (UniqueName: \"kubernetes.io/projected/9c24335e-75be-481e-b1c8-631913d074ee-kube-api-access-6xwxg\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.852814 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.852870 4699 reconciler_common.go:293] "Volume detached 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.852937 4699 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.852996 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w8wd\" (UniqueName: \"kubernetes.io/projected/833927c0-710f-446e-a3be-0df2b2399638-kube-api-access-9w8wd\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.853054 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.853128 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztqww\" (UniqueName: \"kubernetes.io/projected/630dc7fb-8bb5-4136-accd-eb460ad0e940-kube-api-access-ztqww\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.853211 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.853282 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.853339 4699 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-httpd-run\") on node \"crc\" DevicePath 
\"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.853407 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630dc7fb-8bb5-4136-accd-eb460ad0e940-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.853463 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630dc7fb-8bb5-4136-accd-eb460ad0e940-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.853531 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c24335e-75be-481e-b1c8-631913d074ee-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.853592 4699 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/833927c0-710f-446e-a3be-0df2b2399638-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.853648 4699 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c24335e-75be-481e-b1c8-631913d074ee-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.873586 4699 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.879380 4699 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.955431 4699 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" 
Feb 26 11:31:25 crc kubenswrapper[4699]: I0226 11:31:25.955470 4699 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.093339 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.093363 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c24335e-75be-481e-b1c8-631913d074ee","Type":"ContainerDied","Data":"3505d7f98aca89b6416ea19c75cdca4f118df015fab7cba4bb6ff9fd01c39fa6"} Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.093417 4699 scope.go:117] "RemoveContainer" containerID="bd25a0bbaec5054f7454612c57bed5e09bb1f26fa1edc54363bf9ae7af5130e5" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.096643 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rx6w7" event={"ID":"833927c0-710f-446e-a3be-0df2b2399638","Type":"ContainerDied","Data":"654ff57a8dfa79f3d0a638f1486da08c6ebdf1d037f74a27a5ccdbee84215eb5"} Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.096679 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="654ff57a8dfa79f3d0a638f1486da08c6ebdf1d037f74a27a5ccdbee84215eb5" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.096733 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rx6w7" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.101846 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.101780 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"630dc7fb-8bb5-4136-accd-eb460ad0e940","Type":"ContainerDied","Data":"0462d99185a120468341d7f6efeca5ca1d1c779c506ddb0fb105a2de0f655ad5"} Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.160676 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.170255 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.183493 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.198474 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.204647 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:26 crc kubenswrapper[4699]: E0226 11:31:26.205052 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="630dc7fb-8bb5-4136-accd-eb460ad0e940" containerName="glance-log" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.205074 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="630dc7fb-8bb5-4136-accd-eb460ad0e940" containerName="glance-log" Feb 26 11:31:26 crc kubenswrapper[4699]: E0226 11:31:26.205094 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833927c0-710f-446e-a3be-0df2b2399638" containerName="keystone-bootstrap" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.205101 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="833927c0-710f-446e-a3be-0df2b2399638" containerName="keystone-bootstrap" Feb 
26 11:31:26 crc kubenswrapper[4699]: E0226 11:31:26.205124 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c24335e-75be-481e-b1c8-631913d074ee" containerName="glance-httpd" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.205135 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c24335e-75be-481e-b1c8-631913d074ee" containerName="glance-httpd" Feb 26 11:31:26 crc kubenswrapper[4699]: E0226 11:31:26.205148 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c24335e-75be-481e-b1c8-631913d074ee" containerName="glance-log" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.205154 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c24335e-75be-481e-b1c8-631913d074ee" containerName="glance-log" Feb 26 11:31:26 crc kubenswrapper[4699]: E0226 11:31:26.205175 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="630dc7fb-8bb5-4136-accd-eb460ad0e940" containerName="glance-httpd" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.205182 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="630dc7fb-8bb5-4136-accd-eb460ad0e940" containerName="glance-httpd" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.205368 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="630dc7fb-8bb5-4136-accd-eb460ad0e940" containerName="glance-log" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.205384 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="833927c0-710f-446e-a3be-0df2b2399638" containerName="keystone-bootstrap" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.205399 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="630dc7fb-8bb5-4136-accd-eb460ad0e940" containerName="glance-httpd" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.205411 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c24335e-75be-481e-b1c8-631913d074ee" containerName="glance-log" Feb 26 11:31:26 crc 
kubenswrapper[4699]: I0226 11:31:26.205419 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c24335e-75be-481e-b1c8-631913d074ee" containerName="glance-httpd" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.206344 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.210315 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j4q6c" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.211697 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.211932 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.214480 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rx6w7"] Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.228415 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.233095 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.235006 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.239748 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.260182 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.260514 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.260543 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.260572 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.260655 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-698q6\" (UniqueName: \"kubernetes.io/projected/1c856fe4-2ae4-4e5d-8112-a367658a5082-kube-api-access-698q6\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.260728 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.260822 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.300199 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="630dc7fb-8bb5-4136-accd-eb460ad0e940" path="/var/lib/kubelet/pods/630dc7fb-8bb5-4136-accd-eb460ad0e940/volumes" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.301084 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c24335e-75be-481e-b1c8-631913d074ee" path="/var/lib/kubelet/pods/9c24335e-75be-481e-b1c8-631913d074ee/volumes" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.301783 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rx6w7"] Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.301814 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.308566 
4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-28v5g"] Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.309658 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.312695 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.312803 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.312865 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.313046 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbntt" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.314810 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.323988 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-28v5g"] Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.362343 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-config-data\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.362388 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-fernet-keys\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " 
pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.362410 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.362435 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.362696 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb644\" (UniqueName: \"kubernetes.io/projected/6fcef8c4-762f-45c5-9087-fdfd43cd166f-kube-api-access-wb644\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.362800 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.362963 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89dst\" (UniqueName: \"kubernetes.io/projected/b33c7b6e-a78a-4a10-848c-a65d01deee0b-kube-api-access-89dst\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " 
pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363005 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-credential-keys\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363151 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363181 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-logs\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363249 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363310 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" 
Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363340 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-combined-ca-bundle\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363448 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-scripts\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363480 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363551 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363611 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 
11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363632 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363686 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.363826 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-698q6\" (UniqueName: \"kubernetes.io/projected/1c856fe4-2ae4-4e5d-8112-a367658a5082-kube-api-access-698q6\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.365897 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.366355 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.366806 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.369475 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.370263 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.375285 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.390660 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-698q6\" (UniqueName: \"kubernetes.io/projected/1c856fe4-2ae4-4e5d-8112-a367658a5082-kube-api-access-698q6\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.406417 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467235 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-config-data\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467285 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-fernet-keys\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467311 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467344 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467371 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb644\" (UniqueName: \"kubernetes.io/projected/6fcef8c4-762f-45c5-9087-fdfd43cd166f-kube-api-access-wb644\") pod \"glance-default-external-api-0\" (UID: 
\"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467400 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89dst\" (UniqueName: \"kubernetes.io/projected/b33c7b6e-a78a-4a10-848c-a65d01deee0b-kube-api-access-89dst\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467424 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-credential-keys\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467466 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467490 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-logs\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467533 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-combined-ca-bundle\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " 
pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467568 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-scripts\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467589 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.467661 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.468812 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-logs\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.471543 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.472344 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-credential-keys\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.472701 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.472920 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.473822 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-config-data\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.475501 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-scripts\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.476084 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-combined-ca-bundle\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.489848 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.489882 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.489996 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-fernet-keys\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.492556 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89dst\" (UniqueName: \"kubernetes.io/projected/b33c7b6e-a78a-4a10-848c-a65d01deee0b-kube-api-access-89dst\") pod \"keystone-bootstrap-28v5g\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.493560 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb644\" (UniqueName: \"kubernetes.io/projected/6fcef8c4-762f-45c5-9087-fdfd43cd166f-kube-api-access-wb644\") pod 
\"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.502008 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.543586 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.589166 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:31:26 crc kubenswrapper[4699]: I0226 11:31:26.634956 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.424346 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.497869 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-c6zn7"] Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.498164 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" podUID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerName="dnsmasq-dns" containerID="cri-o://fe976bbefde2fa99a8167c39df0e86003afc4a567d5a020335a060d2c650e894" gracePeriod=10 Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.693535 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-db87b77d9-ns48f"] Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.742012 4699 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.749421 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57899c756d-w9pc5"] Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.750943 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.755605 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.763955 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57899c756d-w9pc5"] Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.801183 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-secret-key\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.801244 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-scripts\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.801299 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-252gw\" (UniqueName: \"kubernetes.io/projected/78d85906-b78a-46eb-b5dd-4da95c1222d8-kube-api-access-252gw\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.801406 
4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-tls-certs\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.801499 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d85906-b78a-46eb-b5dd-4da95c1222d8-logs\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.801522 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-config-data\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.801556 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-combined-ca-bundle\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.833921 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d649b895f-2cm8f"] Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.866171 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.878755 4699 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-5795557cd8-dvzqq"] Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.880797 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.898257 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5795557cd8-dvzqq"] Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.902942 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-combined-ca-bundle\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.902988 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-combined-ca-bundle\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903017 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-secret-key\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903044 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-scripts\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903077 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-252gw\" (UniqueName: \"kubernetes.io/projected/78d85906-b78a-46eb-b5dd-4da95c1222d8-kube-api-access-252gw\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903153 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-horizon-tls-certs\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903172 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-horizon-secret-key\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903192 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-scripts\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903230 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-tls-certs\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903252 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-logs\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903294 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-config-data\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903321 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thkww\" (UniqueName: \"kubernetes.io/projected/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-kube-api-access-thkww\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903343 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d85906-b78a-46eb-b5dd-4da95c1222d8-logs\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.903359 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-config-data\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.904551 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-config-data\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.909033 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-tls-certs\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.909524 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d85906-b78a-46eb-b5dd-4da95c1222d8-logs\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.911362 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-scripts\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.914530 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-combined-ca-bundle\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.918626 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-secret-key\") pod \"horizon-57899c756d-w9pc5\" (UID: 
\"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:27 crc kubenswrapper[4699]: I0226 11:31:27.943573 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-252gw\" (UniqueName: \"kubernetes.io/projected/78d85906-b78a-46eb-b5dd-4da95c1222d8-kube-api-access-252gw\") pod \"horizon-57899c756d-w9pc5\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.005210 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-horizon-tls-certs\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.005251 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-horizon-secret-key\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.005272 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-scripts\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.005296 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-logs\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc 
kubenswrapper[4699]: I0226 11:31:28.005339 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-config-data\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.005372 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thkww\" (UniqueName: \"kubernetes.io/projected/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-kube-api-access-thkww\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.005399 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-combined-ca-bundle\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.006640 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-logs\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.006700 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-scripts\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.007950 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-config-data\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.009874 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-combined-ca-bundle\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.010689 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-horizon-tls-certs\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.026605 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thkww\" (UniqueName: \"kubernetes.io/projected/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-kube-api-access-thkww\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.031840 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0-horizon-secret-key\") pod \"horizon-5795557cd8-dvzqq\" (UID: \"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0\") " pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.122525 4699 generic.go:334] "Generic (PLEG): container finished" podID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerID="fe976bbefde2fa99a8167c39df0e86003afc4a567d5a020335a060d2c650e894" exitCode=0 Feb 26 
11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.122570 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" event={"ID":"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9","Type":"ContainerDied","Data":"fe976bbefde2fa99a8167c39df0e86003afc4a567d5a020335a060d2c650e894"} Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.130405 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.211719 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.270603 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="833927c0-710f-446e-a3be-0df2b2399638" path="/var/lib/kubelet/pods/833927c0-710f-446e-a3be-0df2b2399638/volumes" Feb 26 11:31:28 crc kubenswrapper[4699]: I0226 11:31:28.977015 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" podUID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 26 11:31:33 crc kubenswrapper[4699]: I0226 11:31:33.976745 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" podUID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 26 11:31:35 crc kubenswrapper[4699]: E0226 11:31:35.764125 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 26 11:31:35 crc kubenswrapper[4699]: E0226 11:31:35.764559 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n564h569h585h556h5c5h677h95h567h95h59dh59h56dh654h646hdchd4hd5h5cdh88h666hfbh665h5bdh5fbh5ffhd5h5c5h76hf6h57chddh68cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8k4xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-db87b77d9-ns48f_openstack(c9cf42d8-ed15-44dd-aaed-fbffa29417c4): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:31:35 crc kubenswrapper[4699]: E0226 11:31:35.767091 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-db87b77d9-ns48f" podUID="c9cf42d8-ed15-44dd-aaed-fbffa29417c4" Feb 26 11:31:37 crc kubenswrapper[4699]: E0226 11:31:37.141248 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 26 11:31:37 crc kubenswrapper[4699]: E0226 11:31:37.141698 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99w54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-z6w9z_openstack(47a9d008-5b7e-4866-b92b-efcb60cbfdb0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:31:37 crc kubenswrapper[4699]: E0226 11:31:37.143081 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-z6w9z" podUID="47a9d008-5b7e-4866-b92b-efcb60cbfdb0" Feb 26 11:31:37 crc kubenswrapper[4699]: E0226 11:31:37.157711 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 26 11:31:37 crc kubenswrapper[4699]: E0226 11:31:37.157882 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58bh588h58hf9h674h68bh66ch98h668h55hf5h59dh8fh5ffhb5h6dh596h5c4h8ch8fh657hc7h68fh59bh58h64fhf7h66dh657h86h9fh668q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n82h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7f6f7dcd75-m9jm6_openstack(41ed545b-f613-4408-bd1c-df5a09432e39): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:31:37 crc kubenswrapper[4699]: E0226 
11:31:37.160378 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7f6f7dcd75-m9jm6" podUID="41ed545b-f613-4408-bd1c-df5a09432e39" Feb 26 11:31:37 crc kubenswrapper[4699]: E0226 11:31:37.180939 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 26 11:31:37 crc kubenswrapper[4699]: E0226 11:31:37.181089 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55bh57dh5b4h99hb7h587h5cch687h678hcdh699h58ch666h5bhdch559h65dh66fh99h698h5f7h656h54dh58ch65h666h679h5bfhc7h6fh5ddh679q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfkrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6d649b895f-2cm8f_openstack(d6628395-d6a6-4719-b0ad-10984c3c172b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:31:37 crc kubenswrapper[4699]: E0226 
11:31:37.183619 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6d649b895f-2cm8f" podUID="d6628395-d6a6-4719-b0ad-10984c3c172b" Feb 26 11:31:37 crc kubenswrapper[4699]: E0226 11:31:37.193344 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-z6w9z" podUID="47a9d008-5b7e-4866-b92b-efcb60cbfdb0" Feb 26 11:31:40 crc kubenswrapper[4699]: I0226 11:31:40.213355 4699 generic.go:334] "Generic (PLEG): container finished" podID="ae813248-510e-4b19-bcd8-39cefca6cd37" containerID="0eab0de6a835999edb566f7a018ef04e992296918bfb17f761cbea8ef8c3775a" exitCode=0 Feb 26 11:31:40 crc kubenswrapper[4699]: I0226 11:31:40.213502 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dr78q" event={"ID":"ae813248-510e-4b19-bcd8-39cefca6cd37","Type":"ContainerDied","Data":"0eab0de6a835999edb566f7a018ef04e992296918bfb17f761cbea8ef8c3775a"} Feb 26 11:31:43 crc kubenswrapper[4699]: I0226 11:31:43.978197 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" podUID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Feb 26 11:31:43 crc kubenswrapper[4699]: I0226 11:31:43.978957 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:31:45 crc kubenswrapper[4699]: E0226 11:31:45.632644 4699 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 26 11:31:45 crc kubenswrapper[4699]: E0226 11:31:45.633132 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7h67bh688h56bhd8hd5h5h58fh95h5b5hb4h5b6h564h658h6fh58fh5ch5dfh556hfch657h678h694h67fh566h98h5b7h64h576hc4h5bch695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srl4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7cec2d73-9ca8-4a8b-836d-efce961fbde8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.692620 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.724191 4699 scope.go:117] "RemoveContainer" containerID="fcc40c7508a6a00f53ef699bf82940d37acb3bc8e8309bb9b5ea1335e70a77f3" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.742484 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-config-data\") pod \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.742635 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-scripts\") pod \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.742671 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-logs\") pod \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.742774 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k4xk\" (UniqueName: \"kubernetes.io/projected/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-kube-api-access-8k4xk\") pod \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.742863 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-horizon-secret-key\") pod \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\" (UID: \"c9cf42d8-ed15-44dd-aaed-fbffa29417c4\") " Feb 26 
11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.743375 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-logs" (OuterVolumeSpecName: "logs") pod "c9cf42d8-ed15-44dd-aaed-fbffa29417c4" (UID: "c9cf42d8-ed15-44dd-aaed-fbffa29417c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.743631 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-config-data" (OuterVolumeSpecName: "config-data") pod "c9cf42d8-ed15-44dd-aaed-fbffa29417c4" (UID: "c9cf42d8-ed15-44dd-aaed-fbffa29417c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.743666 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-scripts" (OuterVolumeSpecName: "scripts") pod "c9cf42d8-ed15-44dd-aaed-fbffa29417c4" (UID: "c9cf42d8-ed15-44dd-aaed-fbffa29417c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.747068 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-kube-api-access-8k4xk" (OuterVolumeSpecName: "kube-api-access-8k4xk") pod "c9cf42d8-ed15-44dd-aaed-fbffa29417c4" (UID: "c9cf42d8-ed15-44dd-aaed-fbffa29417c4"). InnerVolumeSpecName "kube-api-access-8k4xk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.747065 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c9cf42d8-ed15-44dd-aaed-fbffa29417c4" (UID: "c9cf42d8-ed15-44dd-aaed-fbffa29417c4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.844849 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k4xk\" (UniqueName: \"kubernetes.io/projected/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-kube-api-access-8k4xk\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.844884 4699 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.844894 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.844907 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.844919 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9cf42d8-ed15-44dd-aaed-fbffa29417c4-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.846446 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.853416 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.865451 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.878945 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.945814 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-svc\") pod \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.945877 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41ed545b-f613-4408-bd1c-df5a09432e39-horizon-secret-key\") pod \"41ed545b-f613-4408-bd1c-df5a09432e39\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.945919 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-nb\") pod \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.945975 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrxfq\" (UniqueName: \"kubernetes.io/projected/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-kube-api-access-lrxfq\") pod \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\" 
(UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946014 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-sb\") pod \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946038 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-combined-ca-bundle\") pod \"ae813248-510e-4b19-bcd8-39cefca6cd37\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946062 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n82h\" (UniqueName: \"kubernetes.io/projected/41ed545b-f613-4408-bd1c-df5a09432e39-kube-api-access-7n82h\") pod \"41ed545b-f613-4408-bd1c-df5a09432e39\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946138 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqk94\" (UniqueName: \"kubernetes.io/projected/ae813248-510e-4b19-bcd8-39cefca6cd37-kube-api-access-pqk94\") pod \"ae813248-510e-4b19-bcd8-39cefca6cd37\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946174 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-scripts\") pod \"d6628395-d6a6-4719-b0ad-10984c3c172b\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946270 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/41ed545b-f613-4408-bd1c-df5a09432e39-logs\") pod \"41ed545b-f613-4408-bd1c-df5a09432e39\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946301 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-swift-storage-0\") pod \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946328 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-config\") pod \"ae813248-510e-4b19-bcd8-39cefca6cd37\" (UID: \"ae813248-510e-4b19-bcd8-39cefca6cd37\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946366 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfkrb\" (UniqueName: \"kubernetes.io/projected/d6628395-d6a6-4719-b0ad-10984c3c172b-kube-api-access-tfkrb\") pod \"d6628395-d6a6-4719-b0ad-10984c3c172b\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946408 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-config-data\") pod \"41ed545b-f613-4408-bd1c-df5a09432e39\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946441 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-config-data\") pod \"d6628395-d6a6-4719-b0ad-10984c3c172b\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 
11:31:45.946465 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-config\") pod \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\" (UID: \"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946489 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6628395-d6a6-4719-b0ad-10984c3c172b-horizon-secret-key\") pod \"d6628395-d6a6-4719-b0ad-10984c3c172b\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946522 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6628395-d6a6-4719-b0ad-10984c3c172b-logs\") pod \"d6628395-d6a6-4719-b0ad-10984c3c172b\" (UID: \"d6628395-d6a6-4719-b0ad-10984c3c172b\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.946558 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-scripts\") pod \"41ed545b-f613-4408-bd1c-df5a09432e39\" (UID: \"41ed545b-f613-4408-bd1c-df5a09432e39\") " Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.947652 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-scripts" (OuterVolumeSpecName: "scripts") pod "41ed545b-f613-4408-bd1c-df5a09432e39" (UID: "41ed545b-f613-4408-bd1c-df5a09432e39"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.947699 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-config-data" (OuterVolumeSpecName: "config-data") pod "d6628395-d6a6-4719-b0ad-10984c3c172b" (UID: "d6628395-d6a6-4719-b0ad-10984c3c172b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.954341 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-scripts" (OuterVolumeSpecName: "scripts") pod "d6628395-d6a6-4719-b0ad-10984c3c172b" (UID: "d6628395-d6a6-4719-b0ad-10984c3c172b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.954594 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae813248-510e-4b19-bcd8-39cefca6cd37-kube-api-access-pqk94" (OuterVolumeSpecName: "kube-api-access-pqk94") pod "ae813248-510e-4b19-bcd8-39cefca6cd37" (UID: "ae813248-510e-4b19-bcd8-39cefca6cd37"). InnerVolumeSpecName "kube-api-access-pqk94". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.956073 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ed545b-f613-4408-bd1c-df5a09432e39-kube-api-access-7n82h" (OuterVolumeSpecName: "kube-api-access-7n82h") pod "41ed545b-f613-4408-bd1c-df5a09432e39" (UID: "41ed545b-f613-4408-bd1c-df5a09432e39"). InnerVolumeSpecName "kube-api-access-7n82h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.956454 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-kube-api-access-lrxfq" (OuterVolumeSpecName: "kube-api-access-lrxfq") pod "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" (UID: "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9"). InnerVolumeSpecName "kube-api-access-lrxfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.956460 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6628395-d6a6-4719-b0ad-10984c3c172b-logs" (OuterVolumeSpecName: "logs") pod "d6628395-d6a6-4719-b0ad-10984c3c172b" (UID: "d6628395-d6a6-4719-b0ad-10984c3c172b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.956916 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ed545b-f613-4408-bd1c-df5a09432e39-logs" (OuterVolumeSpecName: "logs") pod "41ed545b-f613-4408-bd1c-df5a09432e39" (UID: "41ed545b-f613-4408-bd1c-df5a09432e39"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.957147 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-config-data" (OuterVolumeSpecName: "config-data") pod "41ed545b-f613-4408-bd1c-df5a09432e39" (UID: "41ed545b-f613-4408-bd1c-df5a09432e39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.960051 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ed545b-f613-4408-bd1c-df5a09432e39-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "41ed545b-f613-4408-bd1c-df5a09432e39" (UID: "41ed545b-f613-4408-bd1c-df5a09432e39"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.962337 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6628395-d6a6-4719-b0ad-10984c3c172b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d6628395-d6a6-4719-b0ad-10984c3c172b" (UID: "d6628395-d6a6-4719-b0ad-10984c3c172b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.962814 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6628395-d6a6-4719-b0ad-10984c3c172b-kube-api-access-tfkrb" (OuterVolumeSpecName: "kube-api-access-tfkrb") pod "d6628395-d6a6-4719-b0ad-10984c3c172b" (UID: "d6628395-d6a6-4719-b0ad-10984c3c172b"). InnerVolumeSpecName "kube-api-access-tfkrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.981700 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae813248-510e-4b19-bcd8-39cefca6cd37" (UID: "ae813248-510e-4b19-bcd8-39cefca6cd37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4699]: I0226 11:31:45.984281 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-config" (OuterVolumeSpecName: "config") pod "ae813248-510e-4b19-bcd8-39cefca6cd37" (UID: "ae813248-510e-4b19-bcd8-39cefca6cd37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.003818 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" (UID: "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.003839 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-config" (OuterVolumeSpecName: "config") pod "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" (UID: "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.004804 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" (UID: "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.004905 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" (UID: "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.005961 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" (UID: "6ebd52ff-bacc-40c7-afc5-83df5a1c36e9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048507 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6628395-d6a6-4719-b0ad-10984c3c172b-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048545 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048559 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048573 4699 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/41ed545b-f613-4408-bd1c-df5a09432e39-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc 
kubenswrapper[4699]: I0226 11:31:46.048589 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048599 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrxfq\" (UniqueName: \"kubernetes.io/projected/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-kube-api-access-lrxfq\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048610 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048623 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048633 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n82h\" (UniqueName: \"kubernetes.io/projected/41ed545b-f613-4408-bd1c-df5a09432e39-kube-api-access-7n82h\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048642 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqk94\" (UniqueName: \"kubernetes.io/projected/ae813248-510e-4b19-bcd8-39cefca6cd37-kube-api-access-pqk94\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048652 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048661 4699 reconciler_common.go:293] "Volume detached 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ed545b-f613-4408-bd1c-df5a09432e39-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048670 4699 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048680 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae813248-510e-4b19-bcd8-39cefca6cd37-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048690 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfkrb\" (UniqueName: \"kubernetes.io/projected/d6628395-d6a6-4719-b0ad-10984c3c172b-kube-api-access-tfkrb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048700 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41ed545b-f613-4408-bd1c-df5a09432e39-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048712 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6628395-d6a6-4719-b0ad-10984c3c172b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048722 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.048731 4699 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6628395-d6a6-4719-b0ad-10984c3c172b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 
26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.265865 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-db87b77d9-ns48f" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.265887 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f6f7dcd75-m9jm6" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.265922 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.266302 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d649b895f-2cm8f" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.269618 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dr78q" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.283816 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-db87b77d9-ns48f" event={"ID":"c9cf42d8-ed15-44dd-aaed-fbffa29417c4","Type":"ContainerDied","Data":"1e699d538cdebfe589c475a344848f907766835460184b9e330ed614ebb6483c"} Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.283861 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f6f7dcd75-m9jm6" event={"ID":"41ed545b-f613-4408-bd1c-df5a09432e39","Type":"ContainerDied","Data":"b1308b571f0b2b92fb651e80c640ed1db7c81e3d85041ae47619d6dae7c87aad"} Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.283877 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" event={"ID":"6ebd52ff-bacc-40c7-afc5-83df5a1c36e9","Type":"ContainerDied","Data":"db12e6ab7e70b99da81ac4834b205007d7df170db9b9e0a8bd4ab5007bbb10d9"} Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.283894 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-6d649b895f-2cm8f" event={"ID":"d6628395-d6a6-4719-b0ad-10984c3c172b","Type":"ContainerDied","Data":"cf033b29d548f1e02ed1b1bad110c9d77ffdf16f842c34bb4ebc18230fbed6bf"} Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.283908 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dr78q" event={"ID":"ae813248-510e-4b19-bcd8-39cefca6cd37","Type":"ContainerDied","Data":"41ed63c8f69999d16d3b8a0632b0099f90cd743a1b305a70c63928dff741248e"} Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.283922 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41ed63c8f69999d16d3b8a0632b0099f90cd743a1b305a70c63928dff741248e" Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.371420 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f6f7dcd75-m9jm6"] Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.386534 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f6f7dcd75-m9jm6"] Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.411973 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d649b895f-2cm8f"] Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.432972 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d649b895f-2cm8f"] Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.446520 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-db87b77d9-ns48f"] Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.453023 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-db87b77d9-ns48f"] Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.459098 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-c6zn7"] Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.465669 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-c6zn7"] 
Feb 26 11:31:46 crc kubenswrapper[4699]: I0226 11:31:46.984309 4699 scope.go:117] "RemoveContainer" containerID="1784112af5cd06d4ca4320e949f04a24c30b53553bdf95678319674202498461" Feb 26 11:31:47 crc kubenswrapper[4699]: E0226 11:31:47.078476 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 26 11:31:47 crc kubenswrapper[4699]: E0226 11:31:47.079054 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeM
ount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr9sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-f49xd_openstack(8426fd89-9eba-46fa-8611-e98cc7636b41): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:31:47 crc kubenswrapper[4699]: E0226 11:31:47.080199 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-f49xd" podUID="8426fd89-9eba-46fa-8611-e98cc7636b41" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.137259 4699 scope.go:117] "RemoveContainer" containerID="790e1bee9a89611157009e82024f95d4afa15834cb397b52f1c9c892d7cb8150" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.153903 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-x6m79"] Feb 26 11:31:47 crc kubenswrapper[4699]: E0226 11:31:47.154328 4699 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ae813248-510e-4b19-bcd8-39cefca6cd37" containerName="neutron-db-sync" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.154342 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae813248-510e-4b19-bcd8-39cefca6cd37" containerName="neutron-db-sync" Feb 26 11:31:47 crc kubenswrapper[4699]: E0226 11:31:47.154368 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerName="dnsmasq-dns" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.154374 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerName="dnsmasq-dns" Feb 26 11:31:47 crc kubenswrapper[4699]: E0226 11:31:47.154394 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerName="init" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.154400 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerName="init" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.154587 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae813248-510e-4b19-bcd8-39cefca6cd37" containerName="neutron-db-sync" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.154606 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerName="dnsmasq-dns" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.159829 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.178074 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-x6m79"] Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.194375 4699 scope.go:117] "RemoveContainer" containerID="fe976bbefde2fa99a8167c39df0e86003afc4a567d5a020335a060d2c650e894" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.252990 4699 scope.go:117] "RemoveContainer" containerID="9682f0a3316099cd400015d1d5abe7c7f75f2f43640ff21520a7cddc2ba23260" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.272563 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vznpb\" (UniqueName: \"kubernetes.io/projected/a1a11bcd-db42-43bf-86ca-90fafb25674e-kube-api-access-vznpb\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.272951 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.272973 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.273004 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.273079 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.273165 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-config\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.333223 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59dd795c56-7kv72"] Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.339062 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.342003 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.342454 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xrfkn" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.342525 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.343360 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 26 11:31:47 crc kubenswrapper[4699]: E0226 11:31:47.347031 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-f49xd" podUID="8426fd89-9eba-46fa-8611-e98cc7636b41" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.371170 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59dd795c56-7kv72"] Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.376524 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-config\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.376778 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vznpb\" (UniqueName: \"kubernetes.io/projected/a1a11bcd-db42-43bf-86ca-90fafb25674e-kube-api-access-vznpb\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: 
\"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.376801 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.376819 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.376846 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.376902 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.377913 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " 
pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.378005 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.378159 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-config\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.378951 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.380812 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.390805 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57899c756d-w9pc5"] Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.406816 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vznpb\" (UniqueName: \"kubernetes.io/projected/a1a11bcd-db42-43bf-86ca-90fafb25674e-kube-api-access-vznpb\") pod 
\"dnsmasq-dns-5ccc5c4795-x6m79\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.478010 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-combined-ca-bundle\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.478055 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-config\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.478092 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st8wb\" (UniqueName: \"kubernetes.io/projected/715a80f0-cdba-439c-8a82-4838bf8f7e50-kube-api-access-st8wb\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.478148 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-ovndb-tls-certs\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.478220 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-httpd-config\") pod 
\"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.566851 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.581126 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-combined-ca-bundle\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.581174 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-config\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.581206 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st8wb\" (UniqueName: \"kubernetes.io/projected/715a80f0-cdba-439c-8a82-4838bf8f7e50-kube-api-access-st8wb\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.581237 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-ovndb-tls-certs\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.581290 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-httpd-config\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.587784 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-ovndb-tls-certs\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.590451 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-config\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.593360 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-combined-ca-bundle\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.602226 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-httpd-config\") pod \"neutron-59dd795c56-7kv72\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.606075 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st8wb\" (UniqueName: \"kubernetes.io/projected/715a80f0-cdba-439c-8a82-4838bf8f7e50-kube-api-access-st8wb\") pod \"neutron-59dd795c56-7kv72\" (UID: 
\"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.655010 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5795557cd8-dvzqq"] Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.682996 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.756043 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-28v5g"] Feb 26 11:31:47 crc kubenswrapper[4699]: I0226 11:31:47.892081 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:47 crc kubenswrapper[4699]: W0226 11:31:47.954393 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fcef8c4_762f_45c5_9087_fdfd43cd166f.slice/crio-d882d7425cfad82c938bd8e161f347bff453aefcb2f7ce55cc7f9962a1234809 WatchSource:0}: Error finding container d882d7425cfad82c938bd8e161f347bff453aefcb2f7ce55cc7f9962a1234809: Status 404 returned error can't find the container with id d882d7425cfad82c938bd8e161f347bff453aefcb2f7ce55cc7f9962a1234809 Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.236042 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-x6m79"] Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.273197 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ed545b-f613-4408-bd1c-df5a09432e39" path="/var/lib/kubelet/pods/41ed545b-f613-4408-bd1c-df5a09432e39/volumes" Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.273563 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" path="/var/lib/kubelet/pods/6ebd52ff-bacc-40c7-afc5-83df5a1c36e9/volumes" Feb 26 11:31:48 crc kubenswrapper[4699]: 
I0226 11:31:48.274368 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9cf42d8-ed15-44dd-aaed-fbffa29417c4" path="/var/lib/kubelet/pods/c9cf42d8-ed15-44dd-aaed-fbffa29417c4/volumes" Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.274787 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6628395-d6a6-4719-b0ad-10984c3c172b" path="/var/lib/kubelet/pods/d6628395-d6a6-4719-b0ad-10984c3c172b/volumes" Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.349795 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5795557cd8-dvzqq" event={"ID":"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0","Type":"ContainerStarted","Data":"e395594ad1f61e5feb4034016d0fe14bffeb3165820e50ecb46c83448ae5661a"} Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.351483 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57899c756d-w9pc5" event={"ID":"78d85906-b78a-46eb-b5dd-4da95c1222d8","Type":"ContainerStarted","Data":"70b6c63ca13b9c59a7d033612c4fd91b9c2d11c7f06db99a50ef89d5c7c7c5da"} Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.352892 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fcef8c4-762f-45c5-9087-fdfd43cd166f","Type":"ContainerStarted","Data":"d882d7425cfad82c938bd8e161f347bff453aefcb2f7ce55cc7f9962a1234809"} Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.358469 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7g59c" event={"ID":"d45d20cb-c561-4b84-b327-9b096865e8bb","Type":"ContainerStarted","Data":"4266f5dcbf67cb6303072faf9cd69cd6aabcaee0bb9544fa39ab82b24cc3c4e5"} Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.367686 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-28v5g" 
event={"ID":"b33c7b6e-a78a-4a10-848c-a65d01deee0b","Type":"ContainerStarted","Data":"2c50ad90e0d44eb8ed21f890b451db6090ce5a989b38e99bb109caa8d5b20956"} Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.384650 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7g59c" podStartSLOduration=4.470968065 podStartE2EDuration="32.384622459s" podCreationTimestamp="2026-02-26 11:31:16 +0000 UTC" firstStartedPulling="2026-02-26 11:31:17.810524714 +0000 UTC m=+1223.621351148" lastFinishedPulling="2026-02-26 11:31:45.724179098 +0000 UTC m=+1251.535005542" observedRunningTime="2026-02-26 11:31:48.37359064 +0000 UTC m=+1254.184417094" watchObservedRunningTime="2026-02-26 11:31:48.384622459 +0000 UTC m=+1254.195448893" Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.417129 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59dd795c56-7kv72"] Feb 26 11:31:48 crc kubenswrapper[4699]: W0226 11:31:48.445082 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1a11bcd_db42_43bf_86ca_90fafb25674e.slice/crio-f9de2c3df25a9cf2b2ee0fc78e93892cae6d343216a055e8b673d55fa947c455 WatchSource:0}: Error finding container f9de2c3df25a9cf2b2ee0fc78e93892cae6d343216a055e8b673d55fa947c455: Status 404 returned error can't find the container with id f9de2c3df25a9cf2b2ee0fc78e93892cae6d343216a055e8b673d55fa947c455 Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.827630 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:48 crc kubenswrapper[4699]: W0226 11:31:48.860051 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c856fe4_2ae4_4e5d_8112_a367658a5082.slice/crio-534a666cc64d5aef0fcdad971cc27654c030250fd2a92fa19d0af2b3628f9287 WatchSource:0}: Error finding container 
534a666cc64d5aef0fcdad971cc27654c030250fd2a92fa19d0af2b3628f9287: Status 404 returned error can't find the container with id 534a666cc64d5aef0fcdad971cc27654c030250fd2a92fa19d0af2b3628f9287 Feb 26 11:31:48 crc kubenswrapper[4699]: I0226 11:31:48.978977 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-c6zn7" podUID="6ebd52ff-bacc-40c7-afc5-83df5a1c36e9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.436161 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c856fe4-2ae4-4e5d-8112-a367658a5082","Type":"ContainerStarted","Data":"534a666cc64d5aef0fcdad971cc27654c030250fd2a92fa19d0af2b3628f9287"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.439488 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-28v5g" event={"ID":"b33c7b6e-a78a-4a10-848c-a65d01deee0b","Type":"ContainerStarted","Data":"861736c6decfb2ac1c3010699205e1df4da771409780863184ec8e9136dd76db"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.462637 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dd795c56-7kv72" event={"ID":"715a80f0-cdba-439c-8a82-4838bf8f7e50","Type":"ContainerStarted","Data":"79c878075032024e487997f5af9db4e3be830d392f6df8fc08ba5dbf79596db4"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.462696 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dd795c56-7kv72" event={"ID":"715a80f0-cdba-439c-8a82-4838bf8f7e50","Type":"ContainerStarted","Data":"e5405c70871ee395752c1da1df07066a938aedcb1ac422f960283753ab469ce2"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.462706 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dd795c56-7kv72" 
event={"ID":"715a80f0-cdba-439c-8a82-4838bf8f7e50","Type":"ContainerStarted","Data":"6345d756a7b816036dc69f325dd74145097fc551abbeb710dfcdf0451b76e1c8"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.463023 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.465104 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-28v5g" podStartSLOduration=23.46508292 podStartE2EDuration="23.46508292s" podCreationTimestamp="2026-02-26 11:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:49.460005113 +0000 UTC m=+1255.270831557" watchObservedRunningTime="2026-02-26 11:31:49.46508292 +0000 UTC m=+1255.275909354" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.469962 4699 generic.go:334] "Generic (PLEG): container finished" podID="a1a11bcd-db42-43bf-86ca-90fafb25674e" containerID="3c60b289616323cd6352bf0b5554d4a5d5ee327ffbb6b71e27e82bb85958f651" exitCode=0 Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.470059 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" event={"ID":"a1a11bcd-db42-43bf-86ca-90fafb25674e","Type":"ContainerDied","Data":"3c60b289616323cd6352bf0b5554d4a5d5ee327ffbb6b71e27e82bb85958f651"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.470090 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" event={"ID":"a1a11bcd-db42-43bf-86ca-90fafb25674e","Type":"ContainerStarted","Data":"f9de2c3df25a9cf2b2ee0fc78e93892cae6d343216a055e8b673d55fa947c455"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.474673 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5795557cd8-dvzqq" 
event={"ID":"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0","Type":"ContainerStarted","Data":"1767b2abf735105a3b07ed8e99603ab13b42e9a5e09f31946a5ccb228e9ee1f0"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.474714 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5795557cd8-dvzqq" event={"ID":"15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0","Type":"ContainerStarted","Data":"aafcea7d5b89f880d565d186cbde95af7c3060362e13d7c6782b92f5d4756b45"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.497615 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59dd795c56-7kv72" podStartSLOduration=2.497582799 podStartE2EDuration="2.497582799s" podCreationTimestamp="2026-02-26 11:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:49.480820784 +0000 UTC m=+1255.291647238" watchObservedRunningTime="2026-02-26 11:31:49.497582799 +0000 UTC m=+1255.308409233" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.518238 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5795557cd8-dvzqq" podStartSLOduration=21.684463887 podStartE2EDuration="22.518216814s" podCreationTimestamp="2026-02-26 11:31:27 +0000 UTC" firstStartedPulling="2026-02-26 11:31:47.728372517 +0000 UTC m=+1253.539198951" lastFinishedPulling="2026-02-26 11:31:48.562125444 +0000 UTC m=+1254.372951878" observedRunningTime="2026-02-26 11:31:49.512300204 +0000 UTC m=+1255.323126658" watchObservedRunningTime="2026-02-26 11:31:49.518216814 +0000 UTC m=+1255.329043248" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.524053 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57899c756d-w9pc5" event={"ID":"78d85906-b78a-46eb-b5dd-4da95c1222d8","Type":"ContainerStarted","Data":"5570b961c7c2f73533bbe65fa87a9f8cc0b880e79add1f25b918377e32b9375d"} Feb 26 11:31:49 crc 
kubenswrapper[4699]: I0226 11:31:49.524109 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57899c756d-w9pc5" event={"ID":"78d85906-b78a-46eb-b5dd-4da95c1222d8","Type":"ContainerStarted","Data":"de9a25314ef41f7d3414b57dcaeec2a9add4d5ecb708b80dc9af27c79856ba9b"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.545735 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fcef8c4-762f-45c5-9087-fdfd43cd166f","Type":"ContainerStarted","Data":"1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.548910 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cec2d73-9ca8-4a8b-836d-efce961fbde8","Type":"ContainerStarted","Data":"2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d"} Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.586023 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-57899c756d-w9pc5" podStartSLOduration=21.498345813 podStartE2EDuration="22.586005842s" podCreationTimestamp="2026-02-26 11:31:27 +0000 UTC" firstStartedPulling="2026-02-26 11:31:47.360964357 +0000 UTC m=+1253.171790791" lastFinishedPulling="2026-02-26 11:31:48.448624386 +0000 UTC m=+1254.259450820" observedRunningTime="2026-02-26 11:31:49.581978146 +0000 UTC m=+1255.392804580" watchObservedRunningTime="2026-02-26 11:31:49.586005842 +0000 UTC m=+1255.396832296" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.633476 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6dc5565bbf-zgvcg"] Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.637066 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.638472 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dc5565bbf-zgvcg"] Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.641257 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.641449 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.734703 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-httpd-config\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.734814 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-internal-tls-certs\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.734849 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-config\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.734906 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-combined-ca-bundle\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.734933 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-ovndb-tls-certs\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.734958 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g2x8\" (UniqueName: \"kubernetes.io/projected/73fd43db-ab24-441d-9912-881ef04d4572-kube-api-access-6g2x8\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.735021 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-public-tls-certs\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.837183 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-public-tls-certs\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.837535 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-httpd-config\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.837581 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-internal-tls-certs\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.837600 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-config\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.837648 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-combined-ca-bundle\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.837667 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-ovndb-tls-certs\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.837691 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g2x8\" (UniqueName: \"kubernetes.io/projected/73fd43db-ab24-441d-9912-881ef04d4572-kube-api-access-6g2x8\") pod 
\"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.863461 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-config\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.864154 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-httpd-config\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.864395 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-combined-ca-bundle\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.864160 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-internal-tls-certs\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.865084 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-ovndb-tls-certs\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc 
kubenswrapper[4699]: I0226 11:31:49.876971 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-public-tls-certs\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:49 crc kubenswrapper[4699]: I0226 11:31:49.888036 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g2x8\" (UniqueName: \"kubernetes.io/projected/73fd43db-ab24-441d-9912-881ef04d4572-kube-api-access-6g2x8\") pod \"neutron-6dc5565bbf-zgvcg\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:50 crc kubenswrapper[4699]: I0226 11:31:50.007864 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:50 crc kubenswrapper[4699]: I0226 11:31:50.572229 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fcef8c4-762f-45c5-9087-fdfd43cd166f","Type":"ContainerStarted","Data":"1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d"} Feb 26 11:31:50 crc kubenswrapper[4699]: I0226 11:31:50.572696 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6fcef8c4-762f-45c5-9087-fdfd43cd166f" containerName="glance-log" containerID="cri-o://1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb" gracePeriod=30 Feb 26 11:31:50 crc kubenswrapper[4699]: I0226 11:31:50.572846 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6fcef8c4-762f-45c5-9087-fdfd43cd166f" containerName="glance-httpd" containerID="cri-o://1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d" gracePeriod=30 Feb 26 11:31:50 crc kubenswrapper[4699]: 
I0226 11:31:50.584814 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c856fe4-2ae4-4e5d-8112-a367658a5082","Type":"ContainerStarted","Data":"86f7637b447ffd260d1c029f9ebaf1fd5c0a52784cd3264877063821ada8e279"} Feb 26 11:31:50 crc kubenswrapper[4699]: I0226 11:31:50.597744 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=24.597726239 podStartE2EDuration="24.597726239s" podCreationTimestamp="2026-02-26 11:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:50.594130305 +0000 UTC m=+1256.404956769" watchObservedRunningTime="2026-02-26 11:31:50.597726239 +0000 UTC m=+1256.408552673" Feb 26 11:31:50 crc kubenswrapper[4699]: I0226 11:31:50.599198 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" event={"ID":"a1a11bcd-db42-43bf-86ca-90fafb25674e","Type":"ContainerStarted","Data":"f6df4021899217dba2f01191869995ca628d93d69016b474ac26db11ce7351f9"} Feb 26 11:31:50 crc kubenswrapper[4699]: I0226 11:31:50.600285 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:50 crc kubenswrapper[4699]: I0226 11:31:50.625041 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dc5565bbf-zgvcg"] Feb 26 11:31:50 crc kubenswrapper[4699]: I0226 11:31:50.641873 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" podStartSLOduration=3.641854994 podStartE2EDuration="3.641854994s" podCreationTimestamp="2026-02-26 11:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:50.62719826 +0000 UTC m=+1256.438024694" 
watchObservedRunningTime="2026-02-26 11:31:50.641854994 +0000 UTC m=+1256.452681428" Feb 26 11:31:50 crc kubenswrapper[4699]: W0226 11:31:50.827092 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73fd43db_ab24_441d_9912_881ef04d4572.slice/crio-31b648d87b25df09b072d95e938824b9e321e65ded6b88d9eed7727a038a5155 WatchSource:0}: Error finding container 31b648d87b25df09b072d95e938824b9e321e65ded6b88d9eed7727a038a5155: Status 404 returned error can't find the container with id 31b648d87b25df09b072d95e938824b9e321e65ded6b88d9eed7727a038a5155 Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.309859 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.370808 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-config-data\") pod \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.370909 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-logs\") pod \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.370936 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.370983 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-scripts\") pod \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.371060 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb644\" (UniqueName: \"kubernetes.io/projected/6fcef8c4-762f-45c5-9087-fdfd43cd166f-kube-api-access-wb644\") pod \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.371095 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-httpd-run\") pod \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.371132 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-combined-ca-bundle\") pod \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\" (UID: \"6fcef8c4-762f-45c5-9087-fdfd43cd166f\") " Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.371367 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-logs" (OuterVolumeSpecName: "logs") pod "6fcef8c4-762f-45c5-9087-fdfd43cd166f" (UID: "6fcef8c4-762f-45c5-9087-fdfd43cd166f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.371789 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.372621 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6fcef8c4-762f-45c5-9087-fdfd43cd166f" (UID: "6fcef8c4-762f-45c5-9087-fdfd43cd166f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.377621 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fcef8c4-762f-45c5-9087-fdfd43cd166f-kube-api-access-wb644" (OuterVolumeSpecName: "kube-api-access-wb644") pod "6fcef8c4-762f-45c5-9087-fdfd43cd166f" (UID: "6fcef8c4-762f-45c5-9087-fdfd43cd166f"). InnerVolumeSpecName "kube-api-access-wb644". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.379244 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-scripts" (OuterVolumeSpecName: "scripts") pod "6fcef8c4-762f-45c5-9087-fdfd43cd166f" (UID: "6fcef8c4-762f-45c5-9087-fdfd43cd166f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.380220 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "6fcef8c4-762f-45c5-9087-fdfd43cd166f" (UID: "6fcef8c4-762f-45c5-9087-fdfd43cd166f"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.404345 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fcef8c4-762f-45c5-9087-fdfd43cd166f" (UID: "6fcef8c4-762f-45c5-9087-fdfd43cd166f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.441403 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-config-data" (OuterVolumeSpecName: "config-data") pod "6fcef8c4-762f-45c5-9087-fdfd43cd166f" (UID: "6fcef8c4-762f-45c5-9087-fdfd43cd166f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.474756 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb644\" (UniqueName: \"kubernetes.io/projected/6fcef8c4-762f-45c5-9087-fdfd43cd166f-kube-api-access-wb644\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.474789 4699 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fcef8c4-762f-45c5-9087-fdfd43cd166f-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.474799 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.474809 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-config-data\") on node \"crc\" DevicePath 
\"\"" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.474842 4699 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.474855 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcef8c4-762f-45c5-9087-fdfd43cd166f-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.502672 4699 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.576358 4699 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.616062 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc5565bbf-zgvcg" event={"ID":"73fd43db-ab24-441d-9912-881ef04d4572","Type":"ContainerStarted","Data":"31b648d87b25df09b072d95e938824b9e321e65ded6b88d9eed7727a038a5155"} Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.618777 4699 generic.go:334] "Generic (PLEG): container finished" podID="6fcef8c4-762f-45c5-9087-fdfd43cd166f" containerID="1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d" exitCode=143 Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.618801 4699 generic.go:334] "Generic (PLEG): container finished" podID="6fcef8c4-762f-45c5-9087-fdfd43cd166f" containerID="1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb" exitCode=143 Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.618835 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"6fcef8c4-762f-45c5-9087-fdfd43cd166f","Type":"ContainerDied","Data":"1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d"} Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.618857 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fcef8c4-762f-45c5-9087-fdfd43cd166f","Type":"ContainerDied","Data":"1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb"} Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.618867 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fcef8c4-762f-45c5-9087-fdfd43cd166f","Type":"ContainerDied","Data":"d882d7425cfad82c938bd8e161f347bff453aefcb2f7ce55cc7f9962a1234809"} Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.618882 4699 scope.go:117] "RemoveContainer" containerID="1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.618999 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.622759 4699 generic.go:334] "Generic (PLEG): container finished" podID="d45d20cb-c561-4b84-b327-9b096865e8bb" containerID="4266f5dcbf67cb6303072faf9cd69cd6aabcaee0bb9544fa39ab82b24cc3c4e5" exitCode=0 Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.623313 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7g59c" event={"ID":"d45d20cb-c561-4b84-b327-9b096865e8bb","Type":"ContainerDied","Data":"4266f5dcbf67cb6303072faf9cd69cd6aabcaee0bb9544fa39ab82b24cc3c4e5"} Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.715844 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.736810 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.746863 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:51 crc kubenswrapper[4699]: E0226 11:31:51.750378 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcef8c4-762f-45c5-9087-fdfd43cd166f" containerName="glance-log" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.750421 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcef8c4-762f-45c5-9087-fdfd43cd166f" containerName="glance-log" Feb 26 11:31:51 crc kubenswrapper[4699]: E0226 11:31:51.750446 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcef8c4-762f-45c5-9087-fdfd43cd166f" containerName="glance-httpd" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.750454 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcef8c4-762f-45c5-9087-fdfd43cd166f" containerName="glance-httpd" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.750738 4699 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6fcef8c4-762f-45c5-9087-fdfd43cd166f" containerName="glance-log" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.750765 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcef8c4-762f-45c5-9087-fdfd43cd166f" containerName="glance-httpd" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.751898 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.759783 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.761835 4699 scope.go:117] "RemoveContainer" containerID="1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.776463 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.776486 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.885177 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.885671 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " 
pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.885702 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-config-data\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.885757 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.885779 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q7ml\" (UniqueName: \"kubernetes.io/projected/d42e724c-224e-4c68-b5b4-b72d72d4ded8-kube-api-access-6q7ml\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.885801 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.885865 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.885889 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-logs\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.928789 4699 scope.go:117] "RemoveContainer" containerID="1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d" Feb 26 11:31:51 crc kubenswrapper[4699]: E0226 11:31:51.932253 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d\": container with ID starting with 1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d not found: ID does not exist" containerID="1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.932288 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d"} err="failed to get container status \"1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d\": rpc error: code = NotFound desc = could not find container \"1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d\": container with ID starting with 1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d not found: ID does not exist" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.932309 4699 scope.go:117] "RemoveContainer" containerID="1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb" Feb 26 11:31:51 crc kubenswrapper[4699]: E0226 11:31:51.935200 4699 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb\": container with ID starting with 1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb not found: ID does not exist" containerID="1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.935255 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb"} err="failed to get container status \"1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb\": rpc error: code = NotFound desc = could not find container \"1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb\": container with ID starting with 1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb not found: ID does not exist" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.935275 4699 scope.go:117] "RemoveContainer" containerID="1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.936875 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d"} err="failed to get container status \"1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d\": rpc error: code = NotFound desc = could not find container \"1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d\": container with ID starting with 1ae84e423f8195999f8eff78d0bcd4d1b0012d9d2c7560a448febb038de27c9d not found: ID does not exist" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.936898 4699 scope.go:117] "RemoveContainer" containerID="1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.939301 4699 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb"} err="failed to get container status \"1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb\": rpc error: code = NotFound desc = could not find container \"1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb\": container with ID starting with 1046f31d04f2776b06f0de44d09b87f8e5ad14a1988284dd4ff3881de9d6b1cb not found: ID does not exist" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.988182 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-scripts\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.988231 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-logs\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.988268 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.988300 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") 
" pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.988321 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-config-data\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.988366 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.988386 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7ml\" (UniqueName: \"kubernetes.io/projected/d42e724c-224e-4c68-b5b4-b72d72d4ded8-kube-api-access-6q7ml\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.988406 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.989213 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" 
Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.989817 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:51 crc kubenswrapper[4699]: I0226 11:31:51.994860 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-logs\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.002692 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.004009 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.005995 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-config-data\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.023983 4699 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6q7ml\" (UniqueName: \"kubernetes.io/projected/d42e724c-224e-4c68-b5b4-b72d72d4ded8-kube-api-access-6q7ml\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.027569 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-scripts\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.060182 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") " pod="openstack/glance-default-external-api-0" Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.201532 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.280827 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fcef8c4-762f-45c5-9087-fdfd43cd166f" path="/var/lib/kubelet/pods/6fcef8c4-762f-45c5-9087-fdfd43cd166f/volumes" Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.640882 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c856fe4-2ae4-4e5d-8112-a367658a5082","Type":"ContainerStarted","Data":"da66386168828e12898775322c74105b9a00cb1f54506a25a4d1fcf0d9e86a7e"} Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.641032 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1c856fe4-2ae4-4e5d-8112-a367658a5082" containerName="glance-log" containerID="cri-o://86f7637b447ffd260d1c029f9ebaf1fd5c0a52784cd3264877063821ada8e279" gracePeriod=30 Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.641642 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1c856fe4-2ae4-4e5d-8112-a367658a5082" containerName="glance-httpd" containerID="cri-o://da66386168828e12898775322c74105b9a00cb1f54506a25a4d1fcf0d9e86a7e" gracePeriod=30 Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.646535 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z6w9z" event={"ID":"47a9d008-5b7e-4866-b92b-efcb60cbfdb0","Type":"ContainerStarted","Data":"45bdc052e6dc259f4ccec396b223ed5d541f623efae769fc3c166913b1ca187a"} Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.647842 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc5565bbf-zgvcg" event={"ID":"73fd43db-ab24-441d-9912-881ef04d4572","Type":"ContainerStarted","Data":"f1b43b05d45b05ac3c54d378fa118972d9e5848b345eada8b66bb2c67ea89c63"} Feb 26 11:31:52 crc 
kubenswrapper[4699]: I0226 11:31:52.678131 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.678096637 podStartE2EDuration="26.678096637s" podCreationTimestamp="2026-02-26 11:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:52.669477768 +0000 UTC m=+1258.480304222" watchObservedRunningTime="2026-02-26 11:31:52.678096637 +0000 UTC m=+1258.488923081" Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.699199 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-z6w9z" podStartSLOduration=3.789479583 podStartE2EDuration="36.699179116s" podCreationTimestamp="2026-02-26 11:31:16 +0000 UTC" firstStartedPulling="2026-02-26 11:31:17.976464666 +0000 UTC m=+1223.787291100" lastFinishedPulling="2026-02-26 11:31:50.886164199 +0000 UTC m=+1256.696990633" observedRunningTime="2026-02-26 11:31:52.692869483 +0000 UTC m=+1258.503695927" watchObservedRunningTime="2026-02-26 11:31:52.699179116 +0000 UTC m=+1258.510005570" Feb 26 11:31:52 crc kubenswrapper[4699]: I0226 11:31:52.793957 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.035770 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.120128 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-combined-ca-bundle\") pod \"d45d20cb-c561-4b84-b327-9b096865e8bb\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.120278 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-db-sync-config-data\") pod \"d45d20cb-c561-4b84-b327-9b096865e8bb\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.120378 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmccd\" (UniqueName: \"kubernetes.io/projected/d45d20cb-c561-4b84-b327-9b096865e8bb-kube-api-access-xmccd\") pod \"d45d20cb-c561-4b84-b327-9b096865e8bb\" (UID: \"d45d20cb-c561-4b84-b327-9b096865e8bb\") " Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.127654 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45d20cb-c561-4b84-b327-9b096865e8bb-kube-api-access-xmccd" (OuterVolumeSpecName: "kube-api-access-xmccd") pod "d45d20cb-c561-4b84-b327-9b096865e8bb" (UID: "d45d20cb-c561-4b84-b327-9b096865e8bb"). InnerVolumeSpecName "kube-api-access-xmccd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.128429 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d45d20cb-c561-4b84-b327-9b096865e8bb" (UID: "d45d20cb-c561-4b84-b327-9b096865e8bb"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.144818 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d45d20cb-c561-4b84-b327-9b096865e8bb" (UID: "d45d20cb-c561-4b84-b327-9b096865e8bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.222320 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmccd\" (UniqueName: \"kubernetes.io/projected/d45d20cb-c561-4b84-b327-9b096865e8bb-kube-api-access-xmccd\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.222361 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.222376 4699 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d45d20cb-c561-4b84-b327-9b096865e8bb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.689409 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d42e724c-224e-4c68-b5b4-b72d72d4ded8","Type":"ContainerStarted","Data":"93414ab02ed9ca4e817beb6280ab1441d20975697c632df8c1a82aa6fe45a0b0"} Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.692240 4699 generic.go:334] "Generic (PLEG): container finished" podID="1c856fe4-2ae4-4e5d-8112-a367658a5082" containerID="da66386168828e12898775322c74105b9a00cb1f54506a25a4d1fcf0d9e86a7e" exitCode=0 Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 
11:31:53.692276 4699 generic.go:334] "Generic (PLEG): container finished" podID="1c856fe4-2ae4-4e5d-8112-a367658a5082" containerID="86f7637b447ffd260d1c029f9ebaf1fd5c0a52784cd3264877063821ada8e279" exitCode=143 Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.692315 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c856fe4-2ae4-4e5d-8112-a367658a5082","Type":"ContainerDied","Data":"da66386168828e12898775322c74105b9a00cb1f54506a25a4d1fcf0d9e86a7e"} Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.692331 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c856fe4-2ae4-4e5d-8112-a367658a5082","Type":"ContainerDied","Data":"86f7637b447ffd260d1c029f9ebaf1fd5c0a52784cd3264877063821ada8e279"} Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.695766 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7g59c" event={"ID":"d45d20cb-c561-4b84-b327-9b096865e8bb","Type":"ContainerDied","Data":"a8bf2edfe1a0cab1df993c5f3eabf3a6892b72d4d33db983d7476af16ba0c19b"} Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.695822 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8bf2edfe1a0cab1df993c5f3eabf3a6892b72d4d33db983d7476af16ba0c19b" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.697988 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7g59c" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.710068 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc5565bbf-zgvcg" event={"ID":"73fd43db-ab24-441d-9912-881ef04d4572","Type":"ContainerStarted","Data":"fb5409015c0850abe735cc049f283c49118298bd94a368b4191042b9fb38469f"} Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.710763 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.733034 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6dc5565bbf-zgvcg" podStartSLOduration=4.7330120220000005 podStartE2EDuration="4.733012022s" podCreationTimestamp="2026-02-26 11:31:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:53.728643795 +0000 UTC m=+1259.539470249" watchObservedRunningTime="2026-02-26 11:31:53.733012022 +0000 UTC m=+1259.543838476" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.802585 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5fd9f445b9-bnr2j"] Feb 26 11:31:53 crc kubenswrapper[4699]: E0226 11:31:53.803034 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d45d20cb-c561-4b84-b327-9b096865e8bb" containerName="barbican-db-sync" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.803057 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="d45d20cb-c561-4b84-b327-9b096865e8bb" containerName="barbican-db-sync" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.803279 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="d45d20cb-c561-4b84-b327-9b096865e8bb" containerName="barbican-db-sync" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.804208 4699 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.806646 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zs6cf" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.806817 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.807409 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.831326 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd9f445b9-bnr2j"] Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.877295 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8dc77f9b6-7s844"] Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.895427 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.903321 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.941055 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data-custom\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.941677 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb7t4\" (UniqueName: \"kubernetes.io/projected/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-kube-api-access-wb7t4\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.941794 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-logs\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.942083 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.942324 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-combined-ca-bundle\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:53 crc kubenswrapper[4699]: I0226 11:31:53.947674 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8dc77f9b6-7s844"] Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.040183 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-x6m79"] Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.046050 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.078725 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.078840 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-logs\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.078917 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-combined-ca-bundle\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.079078 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data-custom\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.079129 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data-custom\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.079159 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-combined-ca-bundle\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.079175 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzs9k\" (UniqueName: \"kubernetes.io/projected/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-kube-api-access-mzs9k\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: 
\"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.079246 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb7t4\" (UniqueName: \"kubernetes.io/projected/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-kube-api-access-wb7t4\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.079317 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-logs\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.055456 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" podUID="a1a11bcd-db42-43bf-86ca-90fafb25674e" containerName="dnsmasq-dns" containerID="cri-o://f6df4021899217dba2f01191869995ca628d93d69016b474ac26db11ce7351f9" gracePeriod=10 Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.112938 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c455f6f5b-f25td"] Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.114393 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.142187 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gg27w"] Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.143743 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.163200 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c455f6f5b-f25td"] Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181221 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqdfx\" (UniqueName: \"kubernetes.io/projected/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-kube-api-access-sqdfx\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181282 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data-custom\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181306 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181326 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181402 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181420 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-logs\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181440 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-svc\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181471 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181503 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-config\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181519 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcgz\" (UniqueName: \"kubernetes.io/projected/21ee9717-aaae-4511-9cee-fb022818e57d-kube-api-access-mlcgz\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181544 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-logs\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181559 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-combined-ca-bundle\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181579 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data-custom\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181611 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-combined-ca-bundle\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc 
kubenswrapper[4699]: I0226 11:31:54.181628 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzs9k\" (UniqueName: \"kubernetes.io/projected/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-kube-api-access-mzs9k\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.181646 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.186030 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gg27w"] Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.247349 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.250570 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-logs\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.265942 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.267593 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.267806 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-logs\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.268258 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data-custom\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.281918 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data-custom\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.283149 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-svc\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.283223 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.283258 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-config\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.283286 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlcgz\" (UniqueName: \"kubernetes.io/projected/21ee9717-aaae-4511-9cee-fb022818e57d-kube-api-access-mlcgz\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.283329 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-logs\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.283352 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-combined-ca-bundle\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.283393 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.283445 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqdfx\" (UniqueName: \"kubernetes.io/projected/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-kube-api-access-sqdfx\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.283467 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data-custom\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.283488 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.283505 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.284966 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-svc\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.285209 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-logs\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.289066 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.293487 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-combined-ca-bundle\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.294003 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb7t4\" (UniqueName: \"kubernetes.io/projected/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-kube-api-access-wb7t4\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.299996 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzs9k\" (UniqueName: 
\"kubernetes.io/projected/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-kube-api-access-mzs9k\") pod \"barbican-keystone-listener-8dc77f9b6-7s844\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.301228 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.304450 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-combined-ca-bundle\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.301624 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-combined-ca-bundle\") pod \"barbican-worker-5fd9f445b9-bnr2j\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.309155 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.314171 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.314986 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-config\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.317669 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.326177 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data-custom\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.340195 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqdfx\" (UniqueName: \"kubernetes.io/projected/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-kube-api-access-sqdfx\") pod \"barbican-api-7c455f6f5b-f25td\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.341156 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlcgz\" (UniqueName: \"kubernetes.io/projected/21ee9717-aaae-4511-9cee-fb022818e57d-kube-api-access-mlcgz\") pod \"dnsmasq-dns-688c87cc99-gg27w\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: 
I0226 11:31:54.432678 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.549399 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.549904 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.781078 4699 generic.go:334] "Generic (PLEG): container finished" podID="a1a11bcd-db42-43bf-86ca-90fafb25674e" containerID="f6df4021899217dba2f01191869995ca628d93d69016b474ac26db11ce7351f9" exitCode=0 Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.781213 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" event={"ID":"a1a11bcd-db42-43bf-86ca-90fafb25674e","Type":"ContainerDied","Data":"f6df4021899217dba2f01191869995ca628d93d69016b474ac26db11ce7351f9"} Feb 26 11:31:54 crc kubenswrapper[4699]: I0226 11:31:54.793557 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d42e724c-224e-4c68-b5b4-b72d72d4ded8","Type":"ContainerStarted","Data":"6389553c6ab212c6bdd09de56f8a6c0bc3ab110475816ef41318a3d8e60aa198"} Feb 26 11:31:55 crc kubenswrapper[4699]: I0226 11:31:55.804008 4699 generic.go:334] "Generic (PLEG): container finished" podID="b33c7b6e-a78a-4a10-848c-a65d01deee0b" containerID="861736c6decfb2ac1c3010699205e1df4da771409780863184ec8e9136dd76db" exitCode=0 Feb 26 11:31:55 crc kubenswrapper[4699]: I0226 11:31:55.804093 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-28v5g" event={"ID":"b33c7b6e-a78a-4a10-848c-a65d01deee0b","Type":"ContainerDied","Data":"861736c6decfb2ac1c3010699205e1df4da771409780863184ec8e9136dd76db"} Feb 26 11:31:56 crc 
kubenswrapper[4699]: I0226 11:31:56.446420 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.521250 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-httpd-run\") pod \"1c856fe4-2ae4-4e5d-8112-a367658a5082\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.521349 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1c856fe4-2ae4-4e5d-8112-a367658a5082\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.521374 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-config-data\") pod \"1c856fe4-2ae4-4e5d-8112-a367658a5082\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.521536 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-scripts\") pod \"1c856fe4-2ae4-4e5d-8112-a367658a5082\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.521628 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-combined-ca-bundle\") pod \"1c856fe4-2ae4-4e5d-8112-a367658a5082\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.521644 4699 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1c856fe4-2ae4-4e5d-8112-a367658a5082" (UID: "1c856fe4-2ae4-4e5d-8112-a367658a5082"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.521667 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-698q6\" (UniqueName: \"kubernetes.io/projected/1c856fe4-2ae4-4e5d-8112-a367658a5082-kube-api-access-698q6\") pod \"1c856fe4-2ae4-4e5d-8112-a367658a5082\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.521684 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-logs\") pod \"1c856fe4-2ae4-4e5d-8112-a367658a5082\" (UID: \"1c856fe4-2ae4-4e5d-8112-a367658a5082\") " Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.522211 4699 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.522463 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-logs" (OuterVolumeSpecName: "logs") pod "1c856fe4-2ae4-4e5d-8112-a367658a5082" (UID: "1c856fe4-2ae4-4e5d-8112-a367658a5082"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.527240 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c856fe4-2ae4-4e5d-8112-a367658a5082-kube-api-access-698q6" (OuterVolumeSpecName: "kube-api-access-698q6") pod "1c856fe4-2ae4-4e5d-8112-a367658a5082" (UID: "1c856fe4-2ae4-4e5d-8112-a367658a5082"). InnerVolumeSpecName "kube-api-access-698q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.529024 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "1c856fe4-2ae4-4e5d-8112-a367658a5082" (UID: "1c856fe4-2ae4-4e5d-8112-a367658a5082"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.533746 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-scripts" (OuterVolumeSpecName: "scripts") pod "1c856fe4-2ae4-4e5d-8112-a367658a5082" (UID: "1c856fe4-2ae4-4e5d-8112-a367658a5082"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.566885 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c856fe4-2ae4-4e5d-8112-a367658a5082" (UID: "1c856fe4-2ae4-4e5d-8112-a367658a5082"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.599212 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-config-data" (OuterVolumeSpecName: "config-data") pod "1c856fe4-2ae4-4e5d-8112-a367658a5082" (UID: "1c856fe4-2ae4-4e5d-8112-a367658a5082"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.627296 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.627338 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-698q6\" (UniqueName: \"kubernetes.io/projected/1c856fe4-2ae4-4e5d-8112-a367658a5082-kube-api-access-698q6\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.627352 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c856fe4-2ae4-4e5d-8112-a367658a5082-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.627393 4699 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.627407 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.627417 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1c856fe4-2ae4-4e5d-8112-a367658a5082-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.645877 4699 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.729689 4699 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.813603 4699 generic.go:334] "Generic (PLEG): container finished" podID="47a9d008-5b7e-4866-b92b-efcb60cbfdb0" containerID="45bdc052e6dc259f4ccec396b223ed5d541f623efae769fc3c166913b1ca187a" exitCode=0 Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.813813 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z6w9z" event={"ID":"47a9d008-5b7e-4866-b92b-efcb60cbfdb0","Type":"ContainerDied","Data":"45bdc052e6dc259f4ccec396b223ed5d541f623efae769fc3c166913b1ca187a"} Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.816265 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c856fe4-2ae4-4e5d-8112-a367658a5082","Type":"ContainerDied","Data":"534a666cc64d5aef0fcdad971cc27654c030250fd2a92fa19d0af2b3628f9287"} Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.816283 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.816305 4699 scope.go:117] "RemoveContainer" containerID="da66386168828e12898775322c74105b9a00cb1f54506a25a4d1fcf0d9e86a7e" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.885800 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.893585 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.908718 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:56 crc kubenswrapper[4699]: E0226 11:31:56.909211 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c856fe4-2ae4-4e5d-8112-a367658a5082" containerName="glance-httpd" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.909232 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c856fe4-2ae4-4e5d-8112-a367658a5082" containerName="glance-httpd" Feb 26 11:31:56 crc kubenswrapper[4699]: E0226 11:31:56.909242 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c856fe4-2ae4-4e5d-8112-a367658a5082" containerName="glance-log" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.909249 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c856fe4-2ae4-4e5d-8112-a367658a5082" containerName="glance-log" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.909452 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c856fe4-2ae4-4e5d-8112-a367658a5082" containerName="glance-httpd" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.909473 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c856fe4-2ae4-4e5d-8112-a367658a5082" containerName="glance-log" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.910491 4699 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.913591 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.916335 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.934564 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-logs\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.934616 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.934635 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.934655 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.934673 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.934698 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.934738 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c2rv\" (UniqueName: \"kubernetes.io/projected/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-kube-api-access-7c2rv\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.934780 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:56 crc kubenswrapper[4699]: I0226 11:31:56.938574 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.008964 4699 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-api-84b6bf6c74-r47qt"] Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.010634 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.013540 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.013820 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.018397 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84b6bf6c74-r47qt"] Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.039712 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.039790 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.039817 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.039858 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.039891 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.039958 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c2rv\" (UniqueName: \"kubernetes.io/projected/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-kube-api-access-7c2rv\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.040030 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.040255 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-logs\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.046681 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-logs\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.046773 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.046872 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.052635 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.055410 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.068712 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-combined-ca-bundle\") 
pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.075767 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c2rv\" (UniqueName: \"kubernetes.io/projected/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-kube-api-access-7c2rv\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.080888 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.088742 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.148256 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85rxv\" (UniqueName: \"kubernetes.io/projected/0876db8f-e235-40d9-b4a5-718097cdf02c-kube-api-access-85rxv\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.148328 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-internal-tls-certs\") pod \"barbican-api-84b6bf6c74-r47qt\" 
(UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.148372 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-combined-ca-bundle\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.148435 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0876db8f-e235-40d9-b4a5-718097cdf02c-logs\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.148474 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.148551 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-public-tls-certs\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.148584 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data-custom\") pod 
\"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.240580 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.250480 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-internal-tls-certs\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.250528 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-combined-ca-bundle\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.250581 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0876db8f-e235-40d9-b4a5-718097cdf02c-logs\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.250619 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.250707 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-public-tls-certs\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.250724 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data-custom\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.250765 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85rxv\" (UniqueName: \"kubernetes.io/projected/0876db8f-e235-40d9-b4a5-718097cdf02c-kube-api-access-85rxv\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.251511 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0876db8f-e235-40d9-b4a5-718097cdf02c-logs\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.255004 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-public-tls-certs\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.255639 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-internal-tls-certs\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.256174 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-combined-ca-bundle\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.256895 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.258093 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data-custom\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.267876 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85rxv\" (UniqueName: \"kubernetes.io/projected/0876db8f-e235-40d9-b4a5-718097cdf02c-kube-api-access-85rxv\") pod \"barbican-api-84b6bf6c74-r47qt\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:57 crc kubenswrapper[4699]: I0226 11:31:57.347268 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.131673 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.132230 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.212477 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.213653 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.279543 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c856fe4-2ae4-4e5d-8112-a367658a5082" path="/var/lib/kubelet/pods/1c856fe4-2ae4-4e5d-8112-a367658a5082/volumes" Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.832026 4699 scope.go:117] "RemoveContainer" containerID="86f7637b447ffd260d1c029f9ebaf1fd5c0a52784cd3264877063821ada8e279" Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.843515 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" event={"ID":"a1a11bcd-db42-43bf-86ca-90fafb25674e","Type":"ContainerDied","Data":"f9de2c3df25a9cf2b2ee0fc78e93892cae6d343216a055e8b673d55fa947c455"} Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.843583 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9de2c3df25a9cf2b2ee0fc78e93892cae6d343216a055e8b673d55fa947c455" Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.845710 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z6w9z" 
event={"ID":"47a9d008-5b7e-4866-b92b-efcb60cbfdb0","Type":"ContainerDied","Data":"920ac430b947cdc3b32b9b6348a1213ef17636f95a2668e9fab680798b77b616"} Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.845837 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="920ac430b947cdc3b32b9b6348a1213ef17636f95a2668e9fab680798b77b616" Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.860037 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-28v5g" event={"ID":"b33c7b6e-a78a-4a10-848c-a65d01deee0b","Type":"ContainerDied","Data":"2c50ad90e0d44eb8ed21f890b451db6090ce5a989b38e99bb109caa8d5b20956"} Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.860140 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c50ad90e0d44eb8ed21f890b451db6090ce5a989b38e99bb109caa8d5b20956" Feb 26 11:31:58 crc kubenswrapper[4699]: I0226 11:31:58.996611 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.019162 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.094360 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.117691 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vznpb\" (UniqueName: \"kubernetes.io/projected/a1a11bcd-db42-43bf-86ca-90fafb25674e-kube-api-access-vznpb\") pod \"a1a11bcd-db42-43bf-86ca-90fafb25674e\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.117772 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-sb\") pod \"a1a11bcd-db42-43bf-86ca-90fafb25674e\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.117801 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89dst\" (UniqueName: \"kubernetes.io/projected/b33c7b6e-a78a-4a10-848c-a65d01deee0b-kube-api-access-89dst\") pod \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.117837 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-scripts\") pod \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.117876 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-combined-ca-bundle\") pod \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.117920 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-config-data\") pod \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.117939 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-fernet-keys\") pod \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.118044 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-svc\") pod \"a1a11bcd-db42-43bf-86ca-90fafb25674e\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.118088 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-swift-storage-0\") pod \"a1a11bcd-db42-43bf-86ca-90fafb25674e\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.118104 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-credential-keys\") pod \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\" (UID: \"b33c7b6e-a78a-4a10-848c-a65d01deee0b\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.118144 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-config\") pod \"a1a11bcd-db42-43bf-86ca-90fafb25674e\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 
11:31:59.118162 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-nb\") pod \"a1a11bcd-db42-43bf-86ca-90fafb25674e\" (UID: \"a1a11bcd-db42-43bf-86ca-90fafb25674e\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.139127 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1a11bcd-db42-43bf-86ca-90fafb25674e-kube-api-access-vznpb" (OuterVolumeSpecName: "kube-api-access-vznpb") pod "a1a11bcd-db42-43bf-86ca-90fafb25674e" (UID: "a1a11bcd-db42-43bf-86ca-90fafb25674e"). InnerVolumeSpecName "kube-api-access-vznpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.140924 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b33c7b6e-a78a-4a10-848c-a65d01deee0b" (UID: "b33c7b6e-a78a-4a10-848c-a65d01deee0b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.140985 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-scripts" (OuterVolumeSpecName: "scripts") pod "b33c7b6e-a78a-4a10-848c-a65d01deee0b" (UID: "b33c7b6e-a78a-4a10-848c-a65d01deee0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.141194 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b33c7b6e-a78a-4a10-848c-a65d01deee0b" (UID: "b33c7b6e-a78a-4a10-848c-a65d01deee0b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.141211 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b33c7b6e-a78a-4a10-848c-a65d01deee0b-kube-api-access-89dst" (OuterVolumeSpecName: "kube-api-access-89dst") pod "b33c7b6e-a78a-4a10-848c-a65d01deee0b" (UID: "b33c7b6e-a78a-4a10-848c-a65d01deee0b"). InnerVolumeSpecName "kube-api-access-89dst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.222667 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-scripts\") pod \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.222814 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99w54\" (UniqueName: \"kubernetes.io/projected/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-kube-api-access-99w54\") pod \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.222881 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-combined-ca-bundle\") pod \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.222948 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-config-data\") pod \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.222980 4699 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-logs\") pod \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\" (UID: \"47a9d008-5b7e-4866-b92b-efcb60cbfdb0\") " Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.223444 4699 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.223460 4699 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.223470 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vznpb\" (UniqueName: \"kubernetes.io/projected/a1a11bcd-db42-43bf-86ca-90fafb25674e-kube-api-access-vznpb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.223478 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89dst\" (UniqueName: \"kubernetes.io/projected/b33c7b6e-a78a-4a10-848c-a65d01deee0b-kube-api-access-89dst\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.223489 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.224031 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-logs" (OuterVolumeSpecName: "logs") pod "47a9d008-5b7e-4866-b92b-efcb60cbfdb0" (UID: "47a9d008-5b7e-4866-b92b-efcb60cbfdb0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.228095 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-kube-api-access-99w54" (OuterVolumeSpecName: "kube-api-access-99w54") pod "47a9d008-5b7e-4866-b92b-efcb60cbfdb0" (UID: "47a9d008-5b7e-4866-b92b-efcb60cbfdb0"). InnerVolumeSpecName "kube-api-access-99w54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.233333 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-scripts" (OuterVolumeSpecName: "scripts") pod "47a9d008-5b7e-4866-b92b-efcb60cbfdb0" (UID: "47a9d008-5b7e-4866-b92b-efcb60cbfdb0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.233429 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b33c7b6e-a78a-4a10-848c-a65d01deee0b" (UID: "b33c7b6e-a78a-4a10-848c-a65d01deee0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.245157 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a1a11bcd-db42-43bf-86ca-90fafb25674e" (UID: "a1a11bcd-db42-43bf-86ca-90fafb25674e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.278266 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a1a11bcd-db42-43bf-86ca-90fafb25674e" (UID: "a1a11bcd-db42-43bf-86ca-90fafb25674e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.294991 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-config-data" (OuterVolumeSpecName: "config-data") pod "b33c7b6e-a78a-4a10-848c-a65d01deee0b" (UID: "b33c7b6e-a78a-4a10-848c-a65d01deee0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.296776 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1a11bcd-db42-43bf-86ca-90fafb25674e" (UID: "a1a11bcd-db42-43bf-86ca-90fafb25674e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.322265 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47a9d008-5b7e-4866-b92b-efcb60cbfdb0" (UID: "47a9d008-5b7e-4866-b92b-efcb60cbfdb0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.325447 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-config-data" (OuterVolumeSpecName: "config-data") pod "47a9d008-5b7e-4866-b92b-efcb60cbfdb0" (UID: "47a9d008-5b7e-4866-b92b-efcb60cbfdb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.325768 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.325797 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.325808 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.325816 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.325827 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.325835 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-ovsdbserver-sb\") on 
node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.325844 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.325854 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99w54\" (UniqueName: \"kubernetes.io/projected/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-kube-api-access-99w54\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.325863 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33c7b6e-a78a-4a10-848c-a65d01deee0b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.325871 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a9d008-5b7e-4866-b92b-efcb60cbfdb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.359709 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a1a11bcd-db42-43bf-86ca-90fafb25674e" (UID: "a1a11bcd-db42-43bf-86ca-90fafb25674e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.368897 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-config" (OuterVolumeSpecName: "config") pod "a1a11bcd-db42-43bf-86ca-90fafb25674e" (UID: "a1a11bcd-db42-43bf-86ca-90fafb25674e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.440898 4699 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.440945 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a11bcd-db42-43bf-86ca-90fafb25674e-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.485958 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8dc77f9b6-7s844"] Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.665242 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gg27w"] Feb 26 11:31:59 crc kubenswrapper[4699]: W0226 11:31:59.678771 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21ee9717_aaae_4511_9cee_fb022818e57d.slice/crio-729fa2fe733b6553118627e4796e6e00ed271782aa89de919499cdfc619cd740 WatchSource:0}: Error finding container 729fa2fe733b6553118627e4796e6e00ed271782aa89de919499cdfc619cd740: Status 404 returned error can't find the container with id 729fa2fe733b6553118627e4796e6e00ed271782aa89de919499cdfc619cd740 Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.889683 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" event={"ID":"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b","Type":"ContainerStarted","Data":"64d085c2e0471990e9f05ef5274018eb074bf0ab7cec6ddaf7afcafa1dae6331"} Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.907781 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7cec2d73-9ca8-4a8b-836d-efce961fbde8","Type":"ContainerStarted","Data":"5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2"} Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.916128 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z6w9z" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.918897 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-28v5g" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.918940 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" Feb 26 11:31:59 crc kubenswrapper[4699]: I0226 11:31:59.918996 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" event={"ID":"21ee9717-aaae-4511-9cee-fb022818e57d","Type":"ContainerStarted","Data":"729fa2fe733b6553118627e4796e6e00ed271782aa89de919499cdfc619cd740"} Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.023088 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-x6m79"] Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.053668 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-x6m79"] Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.064677 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c455f6f5b-f25td"] Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.072329 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd9f445b9-bnr2j"] Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.191379 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84b6bf6c74-r47qt"] Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.210985 4699 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-67d4f89fb9-65kmq"] Feb 26 11:32:00 crc kubenswrapper[4699]: E0226 11:32:00.211508 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a9d008-5b7e-4866-b92b-efcb60cbfdb0" containerName="placement-db-sync" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.211526 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a9d008-5b7e-4866-b92b-efcb60cbfdb0" containerName="placement-db-sync" Feb 26 11:32:00 crc kubenswrapper[4699]: E0226 11:32:00.211545 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a11bcd-db42-43bf-86ca-90fafb25674e" containerName="init" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.211552 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a11bcd-db42-43bf-86ca-90fafb25674e" containerName="init" Feb 26 11:32:00 crc kubenswrapper[4699]: E0226 11:32:00.211572 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a11bcd-db42-43bf-86ca-90fafb25674e" containerName="dnsmasq-dns" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.211580 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a11bcd-db42-43bf-86ca-90fafb25674e" containerName="dnsmasq-dns" Feb 26 11:32:00 crc kubenswrapper[4699]: E0226 11:32:00.211613 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33c7b6e-a78a-4a10-848c-a65d01deee0b" containerName="keystone-bootstrap" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.211622 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33c7b6e-a78a-4a10-848c-a65d01deee0b" containerName="keystone-bootstrap" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.211821 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33c7b6e-a78a-4a10-848c-a65d01deee0b" containerName="keystone-bootstrap" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.211861 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a11bcd-db42-43bf-86ca-90fafb25674e" 
containerName="dnsmasq-dns" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.211879 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a9d008-5b7e-4866-b92b-efcb60cbfdb0" containerName="placement-db-sync" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.212714 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.215346 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbntt" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.215538 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.215701 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.215804 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.215899 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.216084 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.220529 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67d4f89fb9-65kmq"] Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.257646 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-internal-tls-certs\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc 
kubenswrapper[4699]: W0226 11:32:00.257684 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03f1bc3b_c587_4c47_bbc2_3dca2240d30c.slice/crio-1807142ca5ad23a6f297805a63bd9002973ffc620dfe243a1b7bd07573b9a98e WatchSource:0}: Error finding container 1807142ca5ad23a6f297805a63bd9002973ffc620dfe243a1b7bd07573b9a98e: Status 404 returned error can't find the container with id 1807142ca5ad23a6f297805a63bd9002973ffc620dfe243a1b7bd07573b9a98e Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.257739 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-scripts\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.257773 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-credential-keys\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.257808 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-combined-ca-bundle\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.257835 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-public-tls-certs\") pod 
\"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.257899 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hznp\" (UniqueName: \"kubernetes.io/projected/5d9e1983-3363-4542-a5f0-deb132ea6994-kube-api-access-4hznp\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.257955 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-config-data\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.257987 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-fernet-keys\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.360876 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hznp\" (UniqueName: \"kubernetes.io/projected/5d9e1983-3363-4542-a5f0-deb132ea6994-kube-api-access-4hznp\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.360923 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-config-data\") pod 
\"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.360951 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-fernet-keys\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.361094 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-internal-tls-certs\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.361193 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-scripts\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.361233 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-credential-keys\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.361284 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-combined-ca-bundle\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " 
pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.361310 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-public-tls-certs\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.375144 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-config-data\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.375510 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-fernet-keys\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.381834 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a11bcd-db42-43bf-86ca-90fafb25674e" path="/var/lib/kubelet/pods/a1a11bcd-db42-43bf-86ca-90fafb25674e/volumes" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.388903 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535092-t7q4h"] Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.402939 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-credential-keys\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 
11:32:00.404782 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hznp\" (UniqueName: \"kubernetes.io/projected/5d9e1983-3363-4542-a5f0-deb132ea6994-kube-api-access-4hznp\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.404866 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-public-tls-certs\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.406713 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-combined-ca-bundle\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.409413 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535092-t7q4h"] Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.409493 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.410282 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535092-t7q4h"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.418455 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78f86c6bf8-r6wpf"]
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.420195 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.420470 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.421259 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.422852 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-internal-tls-certs\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.428906 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d9e1983-3363-4542-a5f0-deb132ea6994-scripts\") pod \"keystone-67d4f89fb9-65kmq\" (UID: \"5d9e1983-3363-4542-a5f0-deb132ea6994\") " pod="openstack/keystone-67d4f89fb9-65kmq"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.433272 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.447495 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.447539 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.447369 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2ghn5"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.448736 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.449959 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.470820 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78f86c6bf8-r6wpf"]
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.576766 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-logs\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.576835 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-public-tls-certs\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.576857 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-internal-tls-certs\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.576916 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv6sq\" (UniqueName: \"kubernetes.io/projected/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-kube-api-access-wv6sq\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.576949 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpjjs\" (UniqueName: \"kubernetes.io/projected/343bb829-035d-4834-a0c4-d9a61c11a2ee-kube-api-access-hpjjs\") pod \"auto-csr-approver-29535092-t7q4h\" (UID: \"343bb829-035d-4834-a0c4-d9a61c11a2ee\") " pod="openshift-infra/auto-csr-approver-29535092-t7q4h"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.577031 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-config-data\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.577055 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-scripts\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.577158 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-combined-ca-bundle\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.628008 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d4878dd78-qpvzg"]
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.640239 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d4878dd78-qpvzg"]
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.640369 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.641480 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-67d4f89fb9-65kmq"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.683954 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpjjs\" (UniqueName: \"kubernetes.io/projected/343bb829-035d-4834-a0c4-d9a61c11a2ee-kube-api-access-hpjjs\") pod \"auto-csr-approver-29535092-t7q4h\" (UID: \"343bb829-035d-4834-a0c4-d9a61c11a2ee\") " pod="openshift-infra/auto-csr-approver-29535092-t7q4h"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.684037 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-config-data\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.684075 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-scripts\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.684135 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-combined-ca-bundle\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.684234 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-logs\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.684269 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-public-tls-certs\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.684300 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-internal-tls-certs\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.684365 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv6sq\" (UniqueName: \"kubernetes.io/projected/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-kube-api-access-wv6sq\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.685856 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-logs\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.690185 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-combined-ca-bundle\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.692471 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-scripts\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.697864 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-config-data\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.700510 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-public-tls-certs\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.702456 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-internal-tls-certs\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.707185 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv6sq\" (UniqueName: \"kubernetes.io/projected/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-kube-api-access-wv6sq\") pod \"placement-78f86c6bf8-r6wpf\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.708383 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpjjs\" (UniqueName: \"kubernetes.io/projected/343bb829-035d-4834-a0c4-d9a61c11a2ee-kube-api-access-hpjjs\") pod \"auto-csr-approver-29535092-t7q4h\" (UID: \"343bb829-035d-4834-a0c4-d9a61c11a2ee\") " pod="openshift-infra/auto-csr-approver-29535092-t7q4h"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.729869 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535092-t7q4h"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.775796 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.788376 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-internal-tls-certs\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.788473 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-config-data\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.788517 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-public-tls-certs\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.788712 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-scripts\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.788935 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7700bd0-21d8-4b96-9753-2619443038a3-logs\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.788988 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-combined-ca-bundle\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.789102 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bf9k\" (UniqueName: \"kubernetes.io/projected/b7700bd0-21d8-4b96-9753-2619443038a3-kube-api-access-4bf9k\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.891077 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7700bd0-21d8-4b96-9753-2619443038a3-logs\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.891453 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-combined-ca-bundle\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.891575 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bf9k\" (UniqueName: \"kubernetes.io/projected/b7700bd0-21d8-4b96-9753-2619443038a3-kube-api-access-4bf9k\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.891614 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-internal-tls-certs\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.891734 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-config-data\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.891797 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-public-tls-certs\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.891877 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-scripts\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.894250 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7700bd0-21d8-4b96-9753-2619443038a3-logs\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.902688 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-scripts\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.907904 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-config-data\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.908667 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-internal-tls-certs\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.913383 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-combined-ca-bundle\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.915608 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7700bd0-21d8-4b96-9753-2619443038a3-public-tls-certs\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.924340 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bf9k\" (UniqueName: \"kubernetes.io/projected/b7700bd0-21d8-4b96-9753-2619443038a3-kube-api-access-4bf9k\") pod \"placement-d4878dd78-qpvzg\" (UID: \"b7700bd0-21d8-4b96-9753-2619443038a3\") " pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.943407 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"03f1bc3b-c587-4c47-bbc2-3dca2240d30c","Type":"ContainerStarted","Data":"1807142ca5ad23a6f297805a63bd9002973ffc620dfe243a1b7bd07573b9a98e"}
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.946663 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c455f6f5b-f25td" event={"ID":"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4","Type":"ContainerStarted","Data":"1ea84a2c17c70c4722d76da041934ea3f75af2c65494a5778df946ebb8677371"}
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.950737 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b6bf6c74-r47qt" event={"ID":"0876db8f-e235-40d9-b4a5-718097cdf02c","Type":"ContainerStarted","Data":"079bbabce73c111db6093e96198997a034c6927d448d649260507e6ce83573d4"}
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.952255 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" event={"ID":"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f","Type":"ContainerStarted","Data":"67791da9269463758e09bb6a9c9c2f13b834b1a262a1121df8a5fa0b5b6170cf"}
Feb 26 11:32:00 crc kubenswrapper[4699]: I0226 11:32:00.969059 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:01 crc kubenswrapper[4699]: I0226 11:32:01.206542 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67d4f89fb9-65kmq"]
Feb 26 11:32:01 crc kubenswrapper[4699]: W0226 11:32:01.233390 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d9e1983_3363_4542_a5f0_deb132ea6994.slice/crio-fc01764fcf706afe2883d1c24a362dbebfdc64389d0d5a484a6dc51e9ddb78de WatchSource:0}: Error finding container fc01764fcf706afe2883d1c24a362dbebfdc64389d0d5a484a6dc51e9ddb78de: Status 404 returned error can't find the container with id fc01764fcf706afe2883d1c24a362dbebfdc64389d0d5a484a6dc51e9ddb78de
Feb 26 11:32:01 crc kubenswrapper[4699]: I0226 11:32:01.427304 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535092-t7q4h"]
Feb 26 11:32:01 crc kubenswrapper[4699]: I0226 11:32:01.485979 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78f86c6bf8-r6wpf"]
Feb 26 11:32:01 crc kubenswrapper[4699]: W0226 11:32:01.487012 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod343bb829_035d_4834_a0c4_d9a61c11a2ee.slice/crio-79a1129abbd02611508e0ac75a09840d716301f132a8b45d0ccf4b2b830b608a WatchSource:0}: Error finding container 79a1129abbd02611508e0ac75a09840d716301f132a8b45d0ccf4b2b830b608a: Status 404 returned error can't find the container with id 79a1129abbd02611508e0ac75a09840d716301f132a8b45d0ccf4b2b830b608a
Feb 26 11:32:01 crc kubenswrapper[4699]: W0226 11:32:01.555970 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9ef11cc_2a83_4f0e_b117_4be10a1c0fee.slice/crio-086804bba8040fba8ead2adc36df764be92ea222ee4962825dd9a4df869adac5 WatchSource:0}: Error finding container 086804bba8040fba8ead2adc36df764be92ea222ee4962825dd9a4df869adac5: Status 404 returned error can't find the container with id 086804bba8040fba8ead2adc36df764be92ea222ee4962825dd9a4df869adac5
Feb 26 11:32:01 crc kubenswrapper[4699]: I0226 11:32:01.590050 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d4878dd78-qpvzg"]
Feb 26 11:32:01 crc kubenswrapper[4699]: W0226 11:32:01.594203 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7700bd0_21d8_4b96_9753_2619443038a3.slice/crio-2a9002e29912f771e5283986ce768c9698ed7c635c879f18a8a62771838e81bf WatchSource:0}: Error finding container 2a9002e29912f771e5283986ce768c9698ed7c635c879f18a8a62771838e81bf: Status 404 returned error can't find the container with id 2a9002e29912f771e5283986ce768c9698ed7c635c879f18a8a62771838e81bf
Feb 26 11:32:01 crc kubenswrapper[4699]: I0226 11:32:01.962913 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67d4f89fb9-65kmq" event={"ID":"5d9e1983-3363-4542-a5f0-deb132ea6994","Type":"ContainerStarted","Data":"fc01764fcf706afe2883d1c24a362dbebfdc64389d0d5a484a6dc51e9ddb78de"}
Feb 26 11:32:01 crc kubenswrapper[4699]: I0226 11:32:01.963950 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535092-t7q4h" event={"ID":"343bb829-035d-4834-a0c4-d9a61c11a2ee","Type":"ContainerStarted","Data":"79a1129abbd02611508e0ac75a09840d716301f132a8b45d0ccf4b2b830b608a"}
Feb 26 11:32:01 crc kubenswrapper[4699]: I0226 11:32:01.965056 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4878dd78-qpvzg" event={"ID":"b7700bd0-21d8-4b96-9753-2619443038a3","Type":"ContainerStarted","Data":"2a9002e29912f771e5283986ce768c9698ed7c635c879f18a8a62771838e81bf"}
Feb 26 11:32:01 crc kubenswrapper[4699]: I0226 11:32:01.966055 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78f86c6bf8-r6wpf" event={"ID":"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee","Type":"ContainerStarted","Data":"086804bba8040fba8ead2adc36df764be92ea222ee4962825dd9a4df869adac5"}
Feb 26 11:32:02 crc kubenswrapper[4699]: I0226 11:32:02.568415 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc5c4795-x6m79" podUID="a1a11bcd-db42-43bf-86ca-90fafb25674e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.156:5353: i/o timeout"
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.017609 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c455f6f5b-f25td" event={"ID":"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4","Type":"ContainerStarted","Data":"68ffeebcc8219b513faf07851f7ec0e29081e29acdefcbc0d3a8bcb52016ff06"}
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.029008 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67d4f89fb9-65kmq" event={"ID":"5d9e1983-3363-4542-a5f0-deb132ea6994","Type":"ContainerStarted","Data":"c6475f92b33fc20f6dc96976597c480d28deff6e28b83fbd4087da8b404d81f6"}
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.029660 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-67d4f89fb9-65kmq"
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.034678 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4878dd78-qpvzg" event={"ID":"b7700bd0-21d8-4b96-9753-2619443038a3","Type":"ContainerStarted","Data":"c2abd1a49bd60a70897e46e829fede9f4c909194d03a92818f99d204012b49c8"}
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.058951 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78f86c6bf8-r6wpf" event={"ID":"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee","Type":"ContainerStarted","Data":"cfb24c179a421c25f4518e3a61ebbebf3cbb893957f33df4ff29a903e7099944"}
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.068537 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-67d4f89fb9-65kmq" podStartSLOduration=3.068520147 podStartE2EDuration="3.068520147s" podCreationTimestamp="2026-02-26 11:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:03.055564152 +0000 UTC m=+1268.866390616" watchObservedRunningTime="2026-02-26 11:32:03.068520147 +0000 UTC m=+1268.879346581"
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.070440 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"03f1bc3b-c587-4c47-bbc2-3dca2240d30c","Type":"ContainerStarted","Data":"96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558"}
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.084889 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d42e724c-224e-4c68-b5b4-b72d72d4ded8","Type":"ContainerStarted","Data":"178382a3de582f32c10d899adfcff30626331736ab73b2469c8ca1b37fab0c4c"}
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.094922 4699 generic.go:334] "Generic (PLEG): container finished" podID="21ee9717-aaae-4511-9cee-fb022818e57d" containerID="92cf2b1cba562648cb5236aef5b4582d6ded613391d9217a2ee3e5335a2f73cf" exitCode=0
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.095003 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" event={"ID":"21ee9717-aaae-4511-9cee-fb022818e57d","Type":"ContainerDied","Data":"92cf2b1cba562648cb5236aef5b4582d6ded613391d9217a2ee3e5335a2f73cf"}
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.099812 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f49xd" event={"ID":"8426fd89-9eba-46fa-8611-e98cc7636b41","Type":"ContainerStarted","Data":"2cec29afd9941e14f3e1571b5331427d3b1faa6723571c88143afc902d980bd2"}
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.109435 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.109409377 podStartE2EDuration="12.109409377s" podCreationTimestamp="2026-02-26 11:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:03.108970425 +0000 UTC m=+1268.919796859" watchObservedRunningTime="2026-02-26 11:32:03.109409377 +0000 UTC m=+1268.920235811"
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.115338 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b6bf6c74-r47qt" event={"ID":"0876db8f-e235-40d9-b4a5-718097cdf02c","Type":"ContainerStarted","Data":"c3cf20e496184d423dd9676570affb3ed62ff3f5e0e800069d47d590effab24c"}
Feb 26 11:32:03 crc kubenswrapper[4699]: I0226 11:32:03.134994 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-f49xd" podStartSLOduration=5.612793488 podStartE2EDuration="47.134970966s" podCreationTimestamp="2026-02-26 11:31:16 +0000 UTC" firstStartedPulling="2026-02-26 11:31:17.774246916 +0000 UTC m=+1223.585073350" lastFinishedPulling="2026-02-26 11:31:59.296424394 +0000 UTC m=+1265.107250828" observedRunningTime="2026-02-26 11:32:03.129833697 +0000 UTC m=+1268.940660131" watchObservedRunningTime="2026-02-26 11:32:03.134970966 +0000 UTC m=+1268.945797420"
Feb 26 11:32:04 crc kubenswrapper[4699]: I0226 11:32:04.131766 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"03f1bc3b-c587-4c47-bbc2-3dca2240d30c","Type":"ContainerStarted","Data":"9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2"}
Feb 26 11:32:04 crc kubenswrapper[4699]: I0226 11:32:04.136684 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b6bf6c74-r47qt" event={"ID":"0876db8f-e235-40d9-b4a5-718097cdf02c","Type":"ContainerStarted","Data":"5fee23b2bd35e07b2bc23127d9ba51147df6b5de9840523d49b7247f51fcf676"}
Feb 26 11:32:04 crc kubenswrapper[4699]: I0226 11:32:04.138256 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84b6bf6c74-r47qt"
Feb 26 11:32:04 crc kubenswrapper[4699]: I0226 11:32:04.138316 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84b6bf6c74-r47qt"
Feb 26 11:32:04 crc kubenswrapper[4699]: I0226 11:32:04.144924 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78f86c6bf8-r6wpf" event={"ID":"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee","Type":"ContainerStarted","Data":"1a317729338c17b4684891909a81baf465ca5e0314fc7b75c6f3742a26c946fe"}
Feb 26 11:32:04 crc kubenswrapper[4699]: I0226 11:32:04.144988 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:04 crc kubenswrapper[4699]: I0226 11:32:04.145163 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78f86c6bf8-r6wpf"
Feb 26 11:32:04 crc kubenswrapper[4699]: I0226 11:32:04.155996 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.155978851 podStartE2EDuration="8.155978851s" podCreationTimestamp="2026-02-26 11:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:04.155056014 +0000 UTC m=+1269.965882468" watchObservedRunningTime="2026-02-26 11:32:04.155978851 +0000 UTC m=+1269.966805305"
Feb 26 11:32:04 crc kubenswrapper[4699]: I0226 11:32:04.184788 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-84b6bf6c74-r47qt" podStartSLOduration=8.184762962 podStartE2EDuration="8.184762962s" podCreationTimestamp="2026-02-26 11:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:04.179453269 +0000 UTC m=+1269.990279723" watchObservedRunningTime="2026-02-26 11:32:04.184762962 +0000 UTC m=+1269.995589406"
Feb 26 11:32:04 crc kubenswrapper[4699]: I0226 11:32:04.212601 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-78f86c6bf8-r6wpf" podStartSLOduration=4.212577836 podStartE2EDuration="4.212577836s" podCreationTimestamp="2026-02-26 11:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:04.206057487 +0000 UTC m=+1270.016883921" watchObservedRunningTime="2026-02-26 11:32:04.212577836 +0000 UTC m=+1270.023404270"
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.170662 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" event={"ID":"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f","Type":"ContainerStarted","Data":"47aa6fcab7ba63e0059bde039291f7d09fed39c47d8ed3b4b011f2b39240d68f"}
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.180297 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535092-t7q4h" event={"ID":"343bb829-035d-4834-a0c4-d9a61c11a2ee","Type":"ContainerStarted","Data":"f2cdecc6eba8599d08f98abb877e3708c955cb03d406931c6fd1ea5f2ab28e98"}
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.191438 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" event={"ID":"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b","Type":"ContainerStarted","Data":"514b33745b4aa127708bf8765bb8617e15516309231f6f17906729d04d3d2a16"}
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.195163 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4878dd78-qpvzg" event={"ID":"b7700bd0-21d8-4b96-9753-2619443038a3","Type":"ContainerStarted","Data":"b52272c268905f2ef6a8812534bcc8bf8110d18f696daedaf2e094ce064ec7f6"}
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.196050 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.196074 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d4878dd78-qpvzg"
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.206913 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535092-t7q4h" podStartSLOduration=2.250786603 podStartE2EDuration="5.20688932s" podCreationTimestamp="2026-02-26 11:32:00 +0000 UTC" firstStartedPulling="2026-02-26 11:32:01.553467765 +0000 UTC m=+1267.364294199" lastFinishedPulling="2026-02-26 11:32:04.509570482 +0000 UTC m=+1270.320396916" observedRunningTime="2026-02-26 11:32:05.197410456 +0000 UTC m=+1271.008236890" watchObservedRunningTime="2026-02-26 11:32:05.20688932 +0000 UTC m=+1271.017715754"
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.220421 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" event={"ID":"21ee9717-aaae-4511-9cee-fb022818e57d","Type":"ContainerStarted","Data":"306402b7645a267592b660f978f8685767bc49fa883947fdba6ed6fa1d54d19c"}
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.221236 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-gg27w"
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.234688 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d4878dd78-qpvzg" podStartSLOduration=5.2346687020000005 podStartE2EDuration="5.234668702s" podCreationTimestamp="2026-02-26 11:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:05.227239018 +0000 UTC m=+1271.038065462" watchObservedRunningTime="2026-02-26 11:32:05.234668702 +0000 UTC m=+1271.045495136"
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.240098 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c455f6f5b-f25td" event={"ID":"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4","Type":"ContainerStarted","Data":"a0d7c518107ce530bde8dc06ecc1543caeb752ad958b36e173be8e60f8d8a088"}
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.240149 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c455f6f5b-f25td"
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.241008 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c455f6f5b-f25td"
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.256219 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" podStartSLOduration=11.256200584 podStartE2EDuration="11.256200584s" podCreationTimestamp="2026-02-26 11:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:05.250422837 +0000 UTC m=+1271.061249271" watchObservedRunningTime="2026-02-26 11:32:05.256200584 +0000 UTC m=+1271.067027018"
Feb 26 11:32:05 crc kubenswrapper[4699]: I0226 11:32:05.276151 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c455f6f5b-f25td" podStartSLOduration=11.27612967 podStartE2EDuration="11.27612967s" podCreationTimestamp="2026-02-26 11:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:05.27164094 +0000 UTC m=+1271.082467374" watchObservedRunningTime="2026-02-26 11:32:05.27612967 +0000 UTC m=+1271.086956104" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.252055 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" event={"ID":"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b","Type":"ContainerStarted","Data":"33718273cf0b85bce01d52282cadd465a6d877e40d664fb877a0e2590e81381a"} Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.254840 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" event={"ID":"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f","Type":"ContainerStarted","Data":"59e86688f7ad25464b86f65ac7156f4d78dbfbb25e41fcbc1ec58a4c8ed79739"} Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.257723 4699 generic.go:334] "Generic (PLEG): container finished" podID="343bb829-035d-4834-a0c4-d9a61c11a2ee" containerID="f2cdecc6eba8599d08f98abb877e3708c955cb03d406931c6fd1ea5f2ab28e98" exitCode=0 Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.257807 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535092-t7q4h" event={"ID":"343bb829-035d-4834-a0c4-d9a61c11a2ee","Type":"ContainerDied","Data":"f2cdecc6eba8599d08f98abb877e3708c955cb03d406931c6fd1ea5f2ab28e98"} Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.280806 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" podStartSLOduration=8.325056648 podStartE2EDuration="13.280785073s" podCreationTimestamp="2026-02-26 11:31:53 +0000 UTC" firstStartedPulling="2026-02-26 11:31:59.497661155 +0000 UTC m=+1265.308487589" lastFinishedPulling="2026-02-26 11:32:04.45338958 +0000 UTC m=+1270.264216014" observedRunningTime="2026-02-26 11:32:06.272194775 +0000 UTC m=+1272.083021229" 
watchObservedRunningTime="2026-02-26 11:32:06.280785073 +0000 UTC m=+1272.091611517" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.319673 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" podStartSLOduration=9.021380526 podStartE2EDuration="13.319650694s" podCreationTimestamp="2026-02-26 11:31:53 +0000 UTC" firstStartedPulling="2026-02-26 11:32:00.178350283 +0000 UTC m=+1265.989176727" lastFinishedPulling="2026-02-26 11:32:04.476620461 +0000 UTC m=+1270.287446895" observedRunningTime="2026-02-26 11:32:06.30566609 +0000 UTC m=+1272.116492524" watchObservedRunningTime="2026-02-26 11:32:06.319650694 +0000 UTC m=+1272.130477158" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.420879 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6596b66679-qmv4f"] Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.423488 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.473207 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6596b66679-qmv4f"] Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.510151 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5bb8c656f4-cl8tt"] Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.511771 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.536455 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bb8c656f4-cl8tt"] Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.556946 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edb59470-4038-48c2-a3ec-f3046406a971-logs\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.557004 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edb59470-4038-48c2-a3ec-f3046406a971-config-data-custom\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.557077 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edb59470-4038-48c2-a3ec-f3046406a971-config-data\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.557157 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb59470-4038-48c2-a3ec-f3046406a971-combined-ca-bundle\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.557285 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5bh4\" (UniqueName: \"kubernetes.io/projected/edb59470-4038-48c2-a3ec-f3046406a971-kube-api-access-r5bh4\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.590618 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7c455f6f5b-f25td"] Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.621940 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-977f89944-b96zk"] Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.625505 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.646067 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-977f89944-b96zk"] Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.659243 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5bh4\" (UniqueName: \"kubernetes.io/projected/edb59470-4038-48c2-a3ec-f3046406a971-kube-api-access-r5bh4\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.659322 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770f4ffe-352c-416b-8f67-a894c4107003-config-data\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.659351 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edb59470-4038-48c2-a3ec-f3046406a971-logs\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.659369 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edb59470-4038-48c2-a3ec-f3046406a971-config-data-custom\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.659413 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/770f4ffe-352c-416b-8f67-a894c4107003-config-data-custom\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.659435 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edb59470-4038-48c2-a3ec-f3046406a971-config-data\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.659451 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/770f4ffe-352c-416b-8f67-a894c4107003-logs\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.659472 
4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770f4ffe-352c-416b-8f67-a894c4107003-combined-ca-bundle\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.659503 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwsgj\" (UniqueName: \"kubernetes.io/projected/770f4ffe-352c-416b-8f67-a894c4107003-kube-api-access-zwsgj\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.659534 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb59470-4038-48c2-a3ec-f3046406a971-combined-ca-bundle\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.661963 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edb59470-4038-48c2-a3ec-f3046406a971-logs\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.676614 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edb59470-4038-48c2-a3ec-f3046406a971-config-data-custom\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" 
Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.677238 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edb59470-4038-48c2-a3ec-f3046406a971-config-data\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.677849 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb59470-4038-48c2-a3ec-f3046406a971-combined-ca-bundle\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.695815 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5bh4\" (UniqueName: \"kubernetes.io/projected/edb59470-4038-48c2-a3ec-f3046406a971-kube-api-access-r5bh4\") pod \"barbican-worker-6596b66679-qmv4f\" (UID: \"edb59470-4038-48c2-a3ec-f3046406a971\") " pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.761445 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6596b66679-qmv4f" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.762874 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmxc\" (UniqueName: \"kubernetes.io/projected/dd004e01-9dac-4316-b6ee-05c1a0f20713-kube-api-access-spmxc\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.762957 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-combined-ca-bundle\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.762996 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770f4ffe-352c-416b-8f67-a894c4107003-config-data\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.763019 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-config-data\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.763057 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-internal-tls-certs\") 
pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.763293 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/770f4ffe-352c-416b-8f67-a894c4107003-config-data-custom\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.763358 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/770f4ffe-352c-416b-8f67-a894c4107003-logs\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.763390 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-config-data-custom\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.763415 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-public-tls-certs\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.763463 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/770f4ffe-352c-416b-8f67-a894c4107003-combined-ca-bundle\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.763545 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwsgj\" (UniqueName: \"kubernetes.io/projected/770f4ffe-352c-416b-8f67-a894c4107003-kube-api-access-zwsgj\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.763851 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd004e01-9dac-4316-b6ee-05c1a0f20713-logs\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.764454 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/770f4ffe-352c-416b-8f67-a894c4107003-logs\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.773742 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770f4ffe-352c-416b-8f67-a894c4107003-config-data\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.776673 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/770f4ffe-352c-416b-8f67-a894c4107003-config-data-custom\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.781416 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwsgj\" (UniqueName: \"kubernetes.io/projected/770f4ffe-352c-416b-8f67-a894c4107003-kube-api-access-zwsgj\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.784822 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770f4ffe-352c-416b-8f67-a894c4107003-combined-ca-bundle\") pod \"barbican-keystone-listener-5bb8c656f4-cl8tt\" (UID: \"770f4ffe-352c-416b-8f67-a894c4107003\") " pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.835605 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.865674 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd004e01-9dac-4316-b6ee-05c1a0f20713-logs\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.865735 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spmxc\" (UniqueName: \"kubernetes.io/projected/dd004e01-9dac-4316-b6ee-05c1a0f20713-kube-api-access-spmxc\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.865787 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-combined-ca-bundle\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.865833 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-config-data\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.865868 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-internal-tls-certs\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " 
pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.865927 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-config-data-custom\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.865948 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-public-tls-certs\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.866472 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd004e01-9dac-4316-b6ee-05c1a0f20713-logs\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.871352 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-combined-ca-bundle\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.874560 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-public-tls-certs\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: 
I0226 11:32:06.877096 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-config-data-custom\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.877873 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-internal-tls-certs\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.883232 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd004e01-9dac-4316-b6ee-05c1a0f20713-config-data\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.894829 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spmxc\" (UniqueName: \"kubernetes.io/projected/dd004e01-9dac-4316-b6ee-05c1a0f20713-kube-api-access-spmxc\") pod \"barbican-api-977f89944-b96zk\" (UID: \"dd004e01-9dac-4316-b6ee-05c1a0f20713\") " pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:06 crc kubenswrapper[4699]: I0226 11:32:06.951541 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:07 crc kubenswrapper[4699]: I0226 11:32:07.241628 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 11:32:07 crc kubenswrapper[4699]: I0226 11:32:07.242463 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 11:32:07 crc kubenswrapper[4699]: I0226 11:32:07.291514 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 11:32:07 crc kubenswrapper[4699]: I0226 11:32:07.307685 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 11:32:07 crc kubenswrapper[4699]: W0226 11:32:07.401196 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedb59470_4038_48c2_a3ec_f3046406a971.slice/crio-f9e9ed4f68ff5e841b905f88457d8ee5e1235b06031621cd281168141ba74c94 WatchSource:0}: Error finding container f9e9ed4f68ff5e841b905f88457d8ee5e1235b06031621cd281168141ba74c94: Status 404 returned error can't find the container with id f9e9ed4f68ff5e841b905f88457d8ee5e1235b06031621cd281168141ba74c94 Feb 26 11:32:07 crc kubenswrapper[4699]: I0226 11:32:07.406550 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6596b66679-qmv4f"] Feb 26 11:32:07 crc kubenswrapper[4699]: I0226 11:32:07.486840 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bb8c656f4-cl8tt"] Feb 26 11:32:07 crc kubenswrapper[4699]: W0226 11:32:07.502324 4699 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod770f4ffe_352c_416b_8f67_a894c4107003.slice/crio-da0df9922567327a6a465c5335d645dd901f4e99046de86caad0e48c0c0c9aa5 WatchSource:0}: Error finding container da0df9922567327a6a465c5335d645dd901f4e99046de86caad0e48c0c0c9aa5: Status 404 returned error can't find the container with id da0df9922567327a6a465c5335d645dd901f4e99046de86caad0e48c0c0c9aa5 Feb 26 11:32:07 crc kubenswrapper[4699]: I0226 11:32:07.592240 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-977f89944-b96zk"] Feb 26 11:32:07 crc kubenswrapper[4699]: W0226 11:32:07.643698 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd004e01_9dac_4316_b6ee_05c1a0f20713.slice/crio-3140f5b1ce1501b49833062e2d5d01a117ba9b880f4453bd067f76646894c128 WatchSource:0}: Error finding container 3140f5b1ce1501b49833062e2d5d01a117ba9b880f4453bd067f76646894c128: Status 404 returned error can't find the container with id 3140f5b1ce1501b49833062e2d5d01a117ba9b880f4453bd067f76646894c128 Feb 26 11:32:07 crc kubenswrapper[4699]: I0226 11:32:07.802935 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535092-t7q4h" Feb 26 11:32:07 crc kubenswrapper[4699]: I0226 11:32:07.901627 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpjjs\" (UniqueName: \"kubernetes.io/projected/343bb829-035d-4834-a0c4-d9a61c11a2ee-kube-api-access-hpjjs\") pod \"343bb829-035d-4834-a0c4-d9a61c11a2ee\" (UID: \"343bb829-035d-4834-a0c4-d9a61c11a2ee\") " Feb 26 11:32:07 crc kubenswrapper[4699]: I0226 11:32:07.924690 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343bb829-035d-4834-a0c4-d9a61c11a2ee-kube-api-access-hpjjs" (OuterVolumeSpecName: "kube-api-access-hpjjs") pod "343bb829-035d-4834-a0c4-d9a61c11a2ee" (UID: "343bb829-035d-4834-a0c4-d9a61c11a2ee"). InnerVolumeSpecName "kube-api-access-hpjjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.004388 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpjjs\" (UniqueName: \"kubernetes.io/projected/343bb829-035d-4834-a0c4-d9a61c11a2ee-kube-api-access-hpjjs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.137462 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57899c756d-w9pc5" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.214023 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5795557cd8-dvzqq" podUID="15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Feb 26 11:32:08 crc 
kubenswrapper[4699]: I0226 11:32:08.286519 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" event={"ID":"770f4ffe-352c-416b-8f67-a894c4107003","Type":"ContainerStarted","Data":"8922e19c5aab9ec1b1802ee35e765681c8e48a651add12222ada15ceab724d5d"} Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.286572 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" event={"ID":"770f4ffe-352c-416b-8f67-a894c4107003","Type":"ContainerStarted","Data":"da0df9922567327a6a465c5335d645dd901f4e99046de86caad0e48c0c0c9aa5"} Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.289917 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535092-t7q4h" event={"ID":"343bb829-035d-4834-a0c4-d9a61c11a2ee","Type":"ContainerDied","Data":"79a1129abbd02611508e0ac75a09840d716301f132a8b45d0ccf4b2b830b608a"} Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.289977 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79a1129abbd02611508e0ac75a09840d716301f132a8b45d0ccf4b2b830b608a" Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.290066 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535092-t7q4h" Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.298188 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-977f89944-b96zk" event={"ID":"dd004e01-9dac-4316-b6ee-05c1a0f20713","Type":"ContainerStarted","Data":"6c3e3ad06d91af3563256917e31187ffdce0e6ac43b19116b69ce425875fd7a8"} Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.298235 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-977f89944-b96zk" event={"ID":"dd004e01-9dac-4316-b6ee-05c1a0f20713","Type":"ContainerStarted","Data":"3140f5b1ce1501b49833062e2d5d01a117ba9b880f4453bd067f76646894c128"} Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.299421 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535086-jjp9j"] Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.300847 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6596b66679-qmv4f" event={"ID":"edb59470-4038-48c2-a3ec-f3046406a971","Type":"ContainerStarted","Data":"870168c7b1196015e731aed77dea2764205201ef5876695bad0a57f0beae9fd1"} Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.300913 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6596b66679-qmv4f" event={"ID":"edb59470-4038-48c2-a3ec-f3046406a971","Type":"ContainerStarted","Data":"f9e9ed4f68ff5e841b905f88457d8ee5e1235b06031621cd281168141ba74c94"} Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.300948 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api-log" containerID="cri-o://68ffeebcc8219b513faf07851f7ec0e29081e29acdefcbc0d3a8bcb52016ff06" gracePeriod=30 Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.301076 4699 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api" containerID="cri-o://a0d7c518107ce530bde8dc06ecc1543caeb752ad958b36e173be8e60f8d8a088" gracePeriod=30 Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.301683 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.301913 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.306792 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Feb 26 11:32:08 crc kubenswrapper[4699]: I0226 11:32:08.323301 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535086-jjp9j"] Feb 26 11:32:09 crc kubenswrapper[4699]: I0226 11:32:09.310639 4699 generic.go:334] "Generic (PLEG): container finished" podID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerID="68ffeebcc8219b513faf07851f7ec0e29081e29acdefcbc0d3a8bcb52016ff06" exitCode=143 Feb 26 11:32:09 crc kubenswrapper[4699]: I0226 11:32:09.310739 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c455f6f5b-f25td" event={"ID":"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4","Type":"ContainerDied","Data":"68ffeebcc8219b513faf07851f7ec0e29081e29acdefcbc0d3a8bcb52016ff06"} Feb 26 11:32:09 crc kubenswrapper[4699]: I0226 11:32:09.554701 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:32:09 crc kubenswrapper[4699]: I0226 11:32:09.627430 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-57c957c4ff-nhzhh"] Feb 26 11:32:09 crc kubenswrapper[4699]: I0226 11:32:09.627728 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" podUID="81843e2c-774f-402a-bd90-c4485ab24c05" containerName="dnsmasq-dns" containerID="cri-o://3eda8514ede18fd03dc0849cf95cf8d9b4cb3f130429078ff465a976e2f5421b" gracePeriod=10 Feb 26 11:32:10 crc kubenswrapper[4699]: I0226 11:32:10.273627 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce3efa9-6f6f-4e81-a7a4-6249237a0d61" path="/var/lib/kubelet/pods/fce3efa9-6f6f-4e81-a7a4-6249237a0d61/volumes" Feb 26 11:32:10 crc kubenswrapper[4699]: I0226 11:32:10.323422 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" event={"ID":"770f4ffe-352c-416b-8f67-a894c4107003","Type":"ContainerStarted","Data":"a7080c86547fd8f7c7e97ac6d4432f041d526e65a75fc69008d1132326998b56"} Feb 26 11:32:10 crc kubenswrapper[4699]: I0226 11:32:10.331527 4699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:32:10 crc kubenswrapper[4699]: I0226 11:32:10.331553 4699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:32:10 crc kubenswrapper[4699]: I0226 11:32:10.331917 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-977f89944-b96zk" event={"ID":"dd004e01-9dac-4316-b6ee-05c1a0f20713","Type":"ContainerStarted","Data":"14f772681a296c450847b03d8e34f52d0bcba29f69abe121cf72db917752342c"} Feb 26 11:32:10 crc kubenswrapper[4699]: I0226 11:32:10.442211 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84b6bf6c74-r47qt" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:32:10 crc kubenswrapper[4699]: I0226 11:32:10.511227 4699 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/placement-d4878dd78-qpvzg" Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.048933 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.332170 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.387508 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6596b66679-qmv4f" event={"ID":"edb59470-4038-48c2-a3ec-f3046406a971","Type":"ContainerStarted","Data":"22902c6f0d47bcb2e5584cc068bb540826236ee2bb20b5249dd39ec46f56f698"} Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.401434 4699 generic.go:334] "Generic (PLEG): container finished" podID="81843e2c-774f-402a-bd90-c4485ab24c05" containerID="3eda8514ede18fd03dc0849cf95cf8d9b4cb3f130429078ff465a976e2f5421b" exitCode=0 Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.402633 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" event={"ID":"81843e2c-774f-402a-bd90-c4485ab24c05","Type":"ContainerDied","Data":"3eda8514ede18fd03dc0849cf95cf8d9b4cb3f130429078ff465a976e2f5421b"} Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.402687 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.402721 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.414643 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6596b66679-qmv4f" podStartSLOduration=5.41462396 podStartE2EDuration="5.41462396s" podCreationTimestamp="2026-02-26 11:32:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:11.40598046 +0000 UTC m=+1277.216806904" watchObservedRunningTime="2026-02-26 11:32:11.41462396 +0000 UTC m=+1277.225450394" Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.437913 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-977f89944-b96zk" podStartSLOduration=5.437893152 podStartE2EDuration="5.437893152s" podCreationTimestamp="2026-02-26 11:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:11.430317683 +0000 UTC m=+1277.241144117" watchObservedRunningTime="2026-02-26 11:32:11.437893152 +0000 UTC m=+1277.248719586" Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.463848 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5fd9f445b9-bnr2j"] Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.464100 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" podUID="4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" containerName="barbican-worker-log" containerID="cri-o://47aa6fcab7ba63e0059bde039291f7d09fed39c47d8ed3b4b011f2b39240d68f" gracePeriod=30 Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.464166 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" podUID="4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" containerName="barbican-worker" containerID="cri-o://59e86688f7ad25464b86f65ac7156f4d78dbfbb25e41fcbc1ec58a4c8ed79739" gracePeriod=30 Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.474045 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5bb8c656f4-cl8tt" podStartSLOduration=5.474025885 
podStartE2EDuration="5.474025885s" podCreationTimestamp="2026-02-26 11:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:11.459237848 +0000 UTC m=+1277.270064302" watchObservedRunningTime="2026-02-26 11:32:11.474025885 +0000 UTC m=+1277.284852319" Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.508437 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-8dc77f9b6-7s844"] Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.508662 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" podUID="b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" containerName="barbican-keystone-listener-log" containerID="cri-o://514b33745b4aa127708bf8765bb8617e15516309231f6f17906729d04d3d2a16" gracePeriod=30 Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.508831 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" podUID="b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" containerName="barbican-keystone-listener" containerID="cri-o://33718273cf0b85bce01d52282cadd465a6d877e40d664fb877a0e2590e81381a" gracePeriod=30 Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.729690 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d4878dd78-qpvzg" Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.825305 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78f86c6bf8-r6wpf"] Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.825566 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-78f86c6bf8-r6wpf" podUID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerName="placement-log" containerID="cri-o://cfb24c179a421c25f4518e3a61ebbebf3cbb893957f33df4ff29a903e7099944" 
gracePeriod=30 Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.826359 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-78f86c6bf8-r6wpf" podUID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerName="placement-api" containerID="cri-o://1a317729338c17b4684891909a81baf465ca5e0314fc7b75c6f3742a26c946fe" gracePeriod=30 Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.875658 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-78f86c6bf8-r6wpf" podUID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.168:8778/\": read tcp 10.217.0.2:51950->10.217.0.168:8778: read: connection reset by peer" Feb 26 11:32:11 crc kubenswrapper[4699]: I0226 11:32:11.875996 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-78f86c6bf8-r6wpf" podUID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.168:8778/\": EOF" Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.202529 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.202577 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.329443 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.343591 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.429972 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" 
podUID="81843e2c-774f-402a-bd90-c4485ab24c05" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused" Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.436722 4699 generic.go:334] "Generic (PLEG): container finished" podID="b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" containerID="514b33745b4aa127708bf8765bb8617e15516309231f6f17906729d04d3d2a16" exitCode=143 Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.436805 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" event={"ID":"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b","Type":"ContainerDied","Data":"514b33745b4aa127708bf8765bb8617e15516309231f6f17906729d04d3d2a16"} Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.454433 4699 generic.go:334] "Generic (PLEG): container finished" podID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerID="cfb24c179a421c25f4518e3a61ebbebf3cbb893957f33df4ff29a903e7099944" exitCode=143 Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.454566 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78f86c6bf8-r6wpf" event={"ID":"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee","Type":"ContainerDied","Data":"cfb24c179a421c25f4518e3a61ebbebf3cbb893957f33df4ff29a903e7099944"} Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.461455 4699 generic.go:334] "Generic (PLEG): container finished" podID="8426fd89-9eba-46fa-8611-e98cc7636b41" containerID="2cec29afd9941e14f3e1571b5331427d3b1faa6723571c88143afc902d980bd2" exitCode=0 Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.461550 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f49xd" event={"ID":"8426fd89-9eba-46fa-8611-e98cc7636b41","Type":"ContainerDied","Data":"2cec29afd9941e14f3e1571b5331427d3b1faa6723571c88143afc902d980bd2"} Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.468860 4699 generic.go:334] "Generic (PLEG): container finished" 
podID="4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" containerID="59e86688f7ad25464b86f65ac7156f4d78dbfbb25e41fcbc1ec58a4c8ed79739" exitCode=0 Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.468894 4699 generic.go:334] "Generic (PLEG): container finished" podID="4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" containerID="47aa6fcab7ba63e0059bde039291f7d09fed39c47d8ed3b4b011f2b39240d68f" exitCode=143 Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.469714 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" event={"ID":"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f","Type":"ContainerDied","Data":"59e86688f7ad25464b86f65ac7156f4d78dbfbb25e41fcbc1ec58a4c8ed79739"} Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.469746 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" event={"ID":"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f","Type":"ContainerDied","Data":"47aa6fcab7ba63e0059bde039291f7d09fed39c47d8ed3b4b011f2b39240d68f"} Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.471809 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 11:32:12 crc kubenswrapper[4699]: I0226 11:32:12.471848 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 11:32:13 crc kubenswrapper[4699]: I0226 11:32:13.359262 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:32:13 crc kubenswrapper[4699]: I0226 11:32:13.722390 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" 
containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:58256->10.217.0.162:9311: read: connection reset by peer" Feb 26 11:32:13 crc kubenswrapper[4699]: I0226 11:32:13.722448 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:58258->10.217.0.162:9311: read: connection reset by peer" Feb 26 11:32:14 crc kubenswrapper[4699]: I0226 11:32:14.328775 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:32:14 crc kubenswrapper[4699]: I0226 11:32:14.495311 4699 generic.go:334] "Generic (PLEG): container finished" podID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerID="a0d7c518107ce530bde8dc06ecc1543caeb752ad958b36e173be8e60f8d8a088" exitCode=0 Feb 26 11:32:14 crc kubenswrapper[4699]: I0226 11:32:14.495464 4699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:32:14 crc kubenswrapper[4699]: I0226 11:32:14.495476 4699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:32:14 crc kubenswrapper[4699]: I0226 11:32:14.496698 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c455f6f5b-f25td" event={"ID":"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4","Type":"ContainerDied","Data":"a0d7c518107ce530bde8dc06ecc1543caeb752ad958b36e173be8e60f8d8a088"} Feb 26 11:32:14 crc kubenswrapper[4699]: I0226 11:32:14.550640 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Feb 26 
11:32:14 crc kubenswrapper[4699]: I0226 11:32:14.550809 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Feb 26 11:32:14 crc kubenswrapper[4699]: I0226 11:32:14.865476 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:32:15 crc kubenswrapper[4699]: I0226 11:32:15.068798 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 11:32:15 crc kubenswrapper[4699]: I0226 11:32:15.071053 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 11:32:15 crc kubenswrapper[4699]: I0226 11:32:15.661943 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:16 crc kubenswrapper[4699]: I0226 11:32:16.526791 4699 generic.go:334] "Generic (PLEG): container finished" podID="b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" containerID="33718273cf0b85bce01d52282cadd465a6d877e40d664fb877a0e2590e81381a" exitCode=0 Feb 26 11:32:16 crc kubenswrapper[4699]: I0226 11:32:16.527075 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" event={"ID":"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b","Type":"ContainerDied","Data":"33718273cf0b85bce01d52282cadd465a6d877e40d664fb877a0e2590e81381a"} Feb 26 11:32:17 crc kubenswrapper[4699]: I0226 11:32:17.423206 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" podUID="81843e2c-774f-402a-bd90-c4485ab24c05" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: 
connection refused" Feb 26 11:32:17 crc kubenswrapper[4699]: I0226 11:32:17.503342 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-977f89944-b96zk" Feb 26 11:32:17 crc kubenswrapper[4699]: I0226 11:32:17.542853 4699 generic.go:334] "Generic (PLEG): container finished" podID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerID="1a317729338c17b4684891909a81baf465ca5e0314fc7b75c6f3742a26c946fe" exitCode=0 Feb 26 11:32:17 crc kubenswrapper[4699]: I0226 11:32:17.543027 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78f86c6bf8-r6wpf" event={"ID":"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee","Type":"ContainerDied","Data":"1a317729338c17b4684891909a81baf465ca5e0314fc7b75c6f3742a26c946fe"} Feb 26 11:32:17 crc kubenswrapper[4699]: I0226 11:32:17.569756 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84b6bf6c74-r47qt"] Feb 26 11:32:17 crc kubenswrapper[4699]: I0226 11:32:17.569991 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84b6bf6c74-r47qt" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerName="barbican-api-log" containerID="cri-o://c3cf20e496184d423dd9676570affb3ed62ff3f5e0e800069d47d590effab24c" gracePeriod=30 Feb 26 11:32:17 crc kubenswrapper[4699]: I0226 11:32:17.570521 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84b6bf6c74-r47qt" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerName="barbican-api" containerID="cri-o://5fee23b2bd35e07b2bc23127d9ba51147df6b5de9840523d49b7247f51fcf676" gracePeriod=30 Feb 26 11:32:17 crc kubenswrapper[4699]: I0226 11:32:17.699246 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.107367 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6dc5565bbf-zgvcg"] 
Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.107654 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6dc5565bbf-zgvcg" podUID="73fd43db-ab24-441d-9912-881ef04d4572" containerName="neutron-api" containerID="cri-o://f1b43b05d45b05ac3c54d378fa118972d9e5848b345eada8b66bb2c67ea89c63" gracePeriod=30 Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.107776 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6dc5565bbf-zgvcg" podUID="73fd43db-ab24-441d-9912-881ef04d4572" containerName="neutron-httpd" containerID="cri-o://fb5409015c0850abe735cc049f283c49118298bd94a368b4191042b9fb38469f" gracePeriod=30 Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.122174 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6dc5565bbf-zgvcg" podUID="73fd43db-ab24-441d-9912-881ef04d4572" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": EOF" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.138184 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d45896d49-mh862"] Feb 26 11:32:18 crc kubenswrapper[4699]: E0226 11:32:18.138582 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343bb829-035d-4834-a0c4-d9a61c11a2ee" containerName="oc" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.138597 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="343bb829-035d-4834-a0c4-d9a61c11a2ee" containerName="oc" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.138782 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="343bb829-035d-4834-a0c4-d9a61c11a2ee" containerName="oc" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.139786 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.151520 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d45896d49-mh862"] Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.226621 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-internal-tls-certs\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.226684 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-config\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.226738 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-combined-ca-bundle\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.226764 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-ovndb-tls-certs\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.227067 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-public-tls-certs\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.227132 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-httpd-config\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.227405 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xpnq\" (UniqueName: \"kubernetes.io/projected/862cb546-78f8-4864-a158-9dc217ec2796-kube-api-access-7xpnq\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.329003 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-public-tls-certs\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.329078 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-httpd-config\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.329228 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xpnq\" (UniqueName: 
\"kubernetes.io/projected/862cb546-78f8-4864-a158-9dc217ec2796-kube-api-access-7xpnq\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.329327 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-internal-tls-certs\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.329375 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-config\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.329411 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-combined-ca-bundle\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.329471 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-ovndb-tls-certs\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.335145 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-public-tls-certs\") pod 
\"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.335146 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-ovndb-tls-certs\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.335773 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-combined-ca-bundle\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.336374 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-config\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.336581 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-internal-tls-certs\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.344151 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/862cb546-78f8-4864-a158-9dc217ec2796-httpd-config\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc 
kubenswrapper[4699]: I0226 11:32:18.349491 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xpnq\" (UniqueName: \"kubernetes.io/projected/862cb546-78f8-4864-a158-9dc217ec2796-kube-api-access-7xpnq\") pod \"neutron-6d45896d49-mh862\" (UID: \"862cb546-78f8-4864-a158-9dc217ec2796\") " pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.462630 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.554390 4699 generic.go:334] "Generic (PLEG): container finished" podID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerID="c3cf20e496184d423dd9676570affb3ed62ff3f5e0e800069d47d590effab24c" exitCode=143 Feb 26 11:32:18 crc kubenswrapper[4699]: I0226 11:32:18.554675 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b6bf6c74-r47qt" event={"ID":"0876db8f-e235-40d9-b4a5-718097cdf02c","Type":"ContainerDied","Data":"c3cf20e496184d423dd9676570affb3ed62ff3f5e0e800069d47d590effab24c"} Feb 26 11:32:19 crc kubenswrapper[4699]: I0226 11:32:19.551489 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Feb 26 11:32:19 crc kubenswrapper[4699]: I0226 11:32:19.552381 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Feb 26 11:32:20 crc kubenswrapper[4699]: I0226 11:32:20.008885 4699 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/neutron-6dc5565bbf-zgvcg" podUID="73fd43db-ab24-441d-9912-881ef04d4572" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": dial tcp 10.217.0.158:9696: connect: connection refused" Feb 26 11:32:20 crc kubenswrapper[4699]: I0226 11:32:20.322608 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:32:20 crc kubenswrapper[4699]: I0226 11:32:20.335734 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:32:21 crc kubenswrapper[4699]: I0226 11:32:21.451346 4699 scope.go:117] "RemoveContainer" containerID="842f6cf352666ae13feda0b772e0ee74a200121a74a35bd2b4b96deac77bd6aa" Feb 26 11:32:21 crc kubenswrapper[4699]: I0226 11:32:21.584026 4699 generic.go:334] "Generic (PLEG): container finished" podID="73fd43db-ab24-441d-9912-881ef04d4572" containerID="fb5409015c0850abe735cc049f283c49118298bd94a368b4191042b9fb38469f" exitCode=0 Feb 26 11:32:21 crc kubenswrapper[4699]: I0226 11:32:21.584099 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc5565bbf-zgvcg" event={"ID":"73fd43db-ab24-441d-9912-881ef04d4572","Type":"ContainerDied","Data":"fb5409015c0850abe735cc049f283c49118298bd94a368b4191042b9fb38469f"} Feb 26 11:32:21 crc kubenswrapper[4699]: I0226 11:32:21.585828 4699 generic.go:334] "Generic (PLEG): container finished" podID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerID="5fee23b2bd35e07b2bc23127d9ba51147df6b5de9840523d49b7247f51fcf676" exitCode=0 Feb 26 11:32:21 crc kubenswrapper[4699]: I0226 11:32:21.585857 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b6bf6c74-r47qt" event={"ID":"0876db8f-e235-40d9-b4a5-718097cdf02c","Type":"ContainerDied","Data":"5fee23b2bd35e07b2bc23127d9ba51147df6b5de9840523d49b7247f51fcf676"} Feb 26 11:32:22 crc kubenswrapper[4699]: I0226 11:32:22.096080 4699 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5795557cd8-dvzqq" Feb 26 11:32:22 crc kubenswrapper[4699]: I0226 11:32:22.164294 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:32:22 crc kubenswrapper[4699]: I0226 11:32:22.172754 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57899c756d-w9pc5"] Feb 26 11:32:22 crc kubenswrapper[4699]: I0226 11:32:22.348739 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84b6bf6c74-r47qt" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.165:9311/healthcheck\": dial tcp 10.217.0.165:9311: connect: connection refused" Feb 26 11:32:22 crc kubenswrapper[4699]: I0226 11:32:22.348817 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84b6bf6c74-r47qt" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.165:9311/healthcheck\": dial tcp 10.217.0.165:9311: connect: connection refused" Feb 26 11:32:22 crc kubenswrapper[4699]: I0226 11:32:22.423167 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" podUID="81843e2c-774f-402a-bd90-c4485ab24c05" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused" Feb 26 11:32:22 crc kubenswrapper[4699]: I0226 11:32:22.423309 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:32:22 crc kubenswrapper[4699]: I0226 11:32:22.593323 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-57899c756d-w9pc5" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerName="horizon-log" 
containerID="cri-o://de9a25314ef41f7d3414b57dcaeec2a9add4d5ecb708b80dc9af27c79856ba9b" gracePeriod=30 Feb 26 11:32:22 crc kubenswrapper[4699]: I0226 11:32:22.593372 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-57899c756d-w9pc5" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerName="horizon" containerID="cri-o://5570b961c7c2f73533bbe65fa87a9f8cc0b880e79add1f25b918377e32b9375d" gracePeriod=30 Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.803666 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-f49xd" Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.931432 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-combined-ca-bundle\") pod \"8426fd89-9eba-46fa-8611-e98cc7636b41\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.931614 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-config-data\") pod \"8426fd89-9eba-46fa-8611-e98cc7636b41\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.931642 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-db-sync-config-data\") pod \"8426fd89-9eba-46fa-8611-e98cc7636b41\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.931678 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8426fd89-9eba-46fa-8611-e98cc7636b41-etc-machine-id\") pod 
\"8426fd89-9eba-46fa-8611-e98cc7636b41\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.931752 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-scripts\") pod \"8426fd89-9eba-46fa-8611-e98cc7636b41\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.931777 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr9sd\" (UniqueName: \"kubernetes.io/projected/8426fd89-9eba-46fa-8611-e98cc7636b41-kube-api-access-mr9sd\") pod \"8426fd89-9eba-46fa-8611-e98cc7636b41\" (UID: \"8426fd89-9eba-46fa-8611-e98cc7636b41\") " Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.932617 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8426fd89-9eba-46fa-8611-e98cc7636b41-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8426fd89-9eba-46fa-8611-e98cc7636b41" (UID: "8426fd89-9eba-46fa-8611-e98cc7636b41"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.937784 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-scripts" (OuterVolumeSpecName: "scripts") pod "8426fd89-9eba-46fa-8611-e98cc7636b41" (UID: "8426fd89-9eba-46fa-8611-e98cc7636b41"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.937944 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8426fd89-9eba-46fa-8611-e98cc7636b41-kube-api-access-mr9sd" (OuterVolumeSpecName: "kube-api-access-mr9sd") pod "8426fd89-9eba-46fa-8611-e98cc7636b41" (UID: "8426fd89-9eba-46fa-8611-e98cc7636b41"). InnerVolumeSpecName "kube-api-access-mr9sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.955391 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8426fd89-9eba-46fa-8611-e98cc7636b41" (UID: "8426fd89-9eba-46fa-8611-e98cc7636b41"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.961543 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8426fd89-9eba-46fa-8611-e98cc7636b41" (UID: "8426fd89-9eba-46fa-8611-e98cc7636b41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:23 crc kubenswrapper[4699]: I0226 11:32:23.979732 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-config-data" (OuterVolumeSpecName: "config-data") pod "8426fd89-9eba-46fa-8611-e98cc7636b41" (UID: "8426fd89-9eba-46fa-8611-e98cc7636b41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:24 crc kubenswrapper[4699]: I0226 11:32:24.034079 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:24 crc kubenswrapper[4699]: I0226 11:32:24.034185 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr9sd\" (UniqueName: \"kubernetes.io/projected/8426fd89-9eba-46fa-8611-e98cc7636b41-kube-api-access-mr9sd\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:24 crc kubenswrapper[4699]: I0226 11:32:24.034201 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:24 crc kubenswrapper[4699]: I0226 11:32:24.034210 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:24 crc kubenswrapper[4699]: I0226 11:32:24.034220 4699 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8426fd89-9eba-46fa-8611-e98cc7636b41-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:24 crc kubenswrapper[4699]: I0226 11:32:24.034228 4699 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8426fd89-9eba-46fa-8611-e98cc7636b41-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:24 crc kubenswrapper[4699]: I0226 11:32:24.550845 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 
10.217.0.162:9311: connect: connection refused" Feb 26 11:32:24 crc kubenswrapper[4699]: I0226 11:32:24.551009 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c455f6f5b-f25td" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Feb 26 11:32:24 crc kubenswrapper[4699]: I0226 11:32:24.611963 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f49xd" event={"ID":"8426fd89-9eba-46fa-8611-e98cc7636b41","Type":"ContainerDied","Data":"3e0a4f4a5840bf076a02406c3b220ed5f7a7941a35ea7875a55be88dc0efa11e"} Feb 26 11:32:24 crc kubenswrapper[4699]: I0226 11:32:24.612287 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e0a4f4a5840bf076a02406c3b220ed5f7a7941a35ea7875a55be88dc0efa11e" Feb 26 11:32:24 crc kubenswrapper[4699]: I0226 11:32:24.612013 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-f49xd" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.100365 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:25 crc kubenswrapper[4699]: E0226 11:32:25.104603 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8426fd89-9eba-46fa-8611-e98cc7636b41" containerName="cinder-db-sync" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.104638 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="8426fd89-9eba-46fa-8611-e98cc7636b41" containerName="cinder-db-sync" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.104897 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="8426fd89-9eba-46fa-8611-e98cc7636b41" containerName="cinder-db-sync" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.106278 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.114133 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.114458 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.114470 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.114642 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bgvh2" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.114799 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.176997 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.177503 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9rw5\" (UniqueName: \"kubernetes.io/projected/81e6c561-d55c-48fa-94a9-2dd7d491fd48-kube-api-access-j9rw5\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.177690 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.177813 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-scripts\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.177926 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81e6c561-d55c-48fa-94a9-2dd7d491fd48-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.178079 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.199746 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vlzrl"] Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.201647 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.228036 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vlzrl"] Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.282685 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.283690 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9rw5\" (UniqueName: \"kubernetes.io/projected/81e6c561-d55c-48fa-94a9-2dd7d491fd48-kube-api-access-j9rw5\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.283804 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.283889 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.283995 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-x97ms\" (UniqueName: \"kubernetes.io/projected/9fa27ea0-52eb-406f-8256-68b4a471e452-kube-api-access-x97ms\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.284159 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.284250 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.284327 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-scripts\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.284409 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81e6c561-d55c-48fa-94a9-2dd7d491fd48-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.284512 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81e6c561-d55c-48fa-94a9-2dd7d491fd48-etc-machine-id\") 
pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.284548 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-config\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.284688 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.284776 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.297226 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.310855 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 
11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.319959 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.320188 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-scripts\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.332104 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9rw5\" (UniqueName: \"kubernetes.io/projected/81e6c561-d55c-48fa-94a9-2dd7d491fd48-kube-api-access-j9rw5\") pod \"cinder-scheduler-0\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.391732 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.391811 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.391852 4699 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-x97ms\" (UniqueName: \"kubernetes.io/projected/9fa27ea0-52eb-406f-8256-68b4a471e452-kube-api-access-x97ms\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.392057 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.392187 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-config\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.392271 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.394418 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.394465 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.395047 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.395143 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-config\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.397489 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.419225 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x97ms\" (UniqueName: \"kubernetes.io/projected/9fa27ea0-52eb-406f-8256-68b4a471e452-kube-api-access-x97ms\") pod \"dnsmasq-dns-6bb4fc677f-vlzrl\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.425326 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.430896 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.436479 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.438853 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.446713 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.495175 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/581ae159-48c4-4821-aede-361485304c59-logs\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.495534 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdwqp\" (UniqueName: \"kubernetes.io/projected/581ae159-48c4-4821-aede-361485304c59-kube-api-access-wdwqp\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.495582 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data-custom\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.495623 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data\") pod \"cinder-api-0\" (UID: 
\"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.495681 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-scripts\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.495869 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/581ae159-48c4-4821-aede-361485304c59-etc-machine-id\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.495939 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.536728 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.605539 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/581ae159-48c4-4821-aede-361485304c59-logs\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.605594 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdwqp\" (UniqueName: \"kubernetes.io/projected/581ae159-48c4-4821-aede-361485304c59-kube-api-access-wdwqp\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.605619 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data-custom\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.605649 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.605681 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-scripts\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.605758 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/581ae159-48c4-4821-aede-361485304c59-etc-machine-id\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.605802 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.613171 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/581ae159-48c4-4821-aede-361485304c59-logs\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.613301 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/581ae159-48c4-4821-aede-361485304c59-etc-machine-id\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.617431 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data-custom\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.649978 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 
11:32:25.658263 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.658382 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdwqp\" (UniqueName: \"kubernetes.io/projected/581ae159-48c4-4821-aede-361485304c59-kube-api-access-wdwqp\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.658703 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-scripts\") pod \"cinder-api-0\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " pod="openstack/cinder-api-0" Feb 26 11:32:25 crc kubenswrapper[4699]: I0226 11:32:25.822505 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:32:26 crc kubenswrapper[4699]: E0226 11:32:26.054797 4699 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78d85906_b78a_46eb_b5dd_4da95c1222d8.slice/crio-5570b961c7c2f73533bbe65fa87a9f8cc0b880e79add1f25b918377e32b9375d.scope\": RecentStats: unable to find data in memory cache]" Feb 26 11:32:26 crc kubenswrapper[4699]: I0226 11:32:26.644236 4699 generic.go:334] "Generic (PLEG): container finished" podID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerID="5570b961c7c2f73533bbe65fa87a9f8cc0b880e79add1f25b918377e32b9375d" exitCode=0 Feb 26 11:32:26 crc kubenswrapper[4699]: I0226 11:32:26.644291 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57899c756d-w9pc5" event={"ID":"78d85906-b78a-46eb-b5dd-4da95c1222d8","Type":"ContainerDied","Data":"5570b961c7c2f73533bbe65fa87a9f8cc0b880e79add1f25b918377e32b9375d"} Feb 26 11:32:27 crc kubenswrapper[4699]: E0226 11:32:27.022015 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 26 11:32:27 crc kubenswrapper[4699]: E0226 11:32:27.022398 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srl4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7cec2d73-9ca8-4a8b-836d-efce961fbde8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:32:27 crc kubenswrapper[4699]: E0226 11:32:27.023662 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.087307 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.090474 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78f86c6bf8-r6wpf" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149092 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-scripts\") pod \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149440 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-combined-ca-bundle\") pod \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149485 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-sb\") pod \"81843e2c-774f-402a-bd90-c4485ab24c05\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149509 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-config-data\") pod \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149547 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-internal-tls-certs\") pod \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149572 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2rj6\" 
(UniqueName: \"kubernetes.io/projected/81843e2c-774f-402a-bd90-c4485ab24c05-kube-api-access-z2rj6\") pod \"81843e2c-774f-402a-bd90-c4485ab24c05\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149589 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-logs\") pod \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149627 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-nb\") pod \"81843e2c-774f-402a-bd90-c4485ab24c05\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149656 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-swift-storage-0\") pod \"81843e2c-774f-402a-bd90-c4485ab24c05\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149677 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-config\") pod \"81843e2c-774f-402a-bd90-c4485ab24c05\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149734 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-svc\") pod \"81843e2c-774f-402a-bd90-c4485ab24c05\" (UID: \"81843e2c-774f-402a-bd90-c4485ab24c05\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149803 4699 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv6sq\" (UniqueName: \"kubernetes.io/projected/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-kube-api-access-wv6sq\") pod \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.149823 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-public-tls-certs\") pod \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\" (UID: \"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.155261 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-logs" (OuterVolumeSpecName: "logs") pod "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" (UID: "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.162330 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-scripts" (OuterVolumeSpecName: "scripts") pod "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" (UID: "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.163005 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.163034 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.168830 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.169534 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.183565 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81843e2c-774f-402a-bd90-c4485ab24c05-kube-api-access-z2rj6" (OuterVolumeSpecName: "kube-api-access-z2rj6") pod "81843e2c-774f-402a-bd90-c4485ab24c05" (UID: "81843e2c-774f-402a-bd90-c4485ab24c05"). InnerVolumeSpecName "kube-api-access-z2rj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.183650 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-kube-api-access-wv6sq" (OuterVolumeSpecName: "kube-api-access-wv6sq") pod "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" (UID: "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee"). InnerVolumeSpecName "kube-api-access-wv6sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.243623 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.264324 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-logs\") pod \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.264430 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-logs\") pod \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.264492 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data-custom\") pod \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.264551 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data-custom\") pod \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.264586 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data\") pod \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.264653 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data\") pod \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.264689 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqdfx\" (UniqueName: \"kubernetes.io/projected/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-kube-api-access-sqdfx\") pod \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.264748 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-combined-ca-bundle\") pod \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\" (UID: \"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.264794 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb7t4\" (UniqueName: \"kubernetes.io/projected/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-kube-api-access-wb7t4\") pod \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.264798 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-logs" (OuterVolumeSpecName: "logs") pod "4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" (UID: "4a31ea66-afbf-4606-baa9-0f5fb98e5c4f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.264888 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-combined-ca-bundle\") pod \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\" (UID: \"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.265567 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv6sq\" (UniqueName: \"kubernetes.io/projected/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-kube-api-access-wv6sq\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.265617 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2rj6\" (UniqueName: \"kubernetes.io/projected/81843e2c-774f-402a-bd90-c4485ab24c05-kube-api-access-z2rj6\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.265631 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.272846 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.273446 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-logs" (OuterVolumeSpecName: "logs") pod "2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" (UID: "2147a5bc-be0c-4ab4-a0ee-ede87002a1a4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.290700 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "81843e2c-774f-402a-bd90-c4485ab24c05" (UID: "81843e2c-774f-402a-bd90-c4485ab24c05"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.295972 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" (UID: "4a31ea66-afbf-4606-baa9-0f5fb98e5c4f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.296821 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-kube-api-access-wb7t4" (OuterVolumeSpecName: "kube-api-access-wb7t4") pod "4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" (UID: "4a31ea66-afbf-4606-baa9-0f5fb98e5c4f"). InnerVolumeSpecName "kube-api-access-wb7t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.301558 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-kube-api-access-sqdfx" (OuterVolumeSpecName: "kube-api-access-sqdfx") pod "2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" (UID: "2147a5bc-be0c-4ab4-a0ee-ede87002a1a4"). InnerVolumeSpecName "kube-api-access-sqdfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.324698 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" (UID: "2147a5bc-be0c-4ab4-a0ee-ede87002a1a4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.333858 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81843e2c-774f-402a-bd90-c4485ab24c05" (UID: "81843e2c-774f-402a-bd90-c4485ab24c05"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.366974 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data-custom\") pod \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.367044 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data\") pod \"0876db8f-e235-40d9-b4a5-718097cdf02c\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.367143 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85rxv\" (UniqueName: \"kubernetes.io/projected/0876db8f-e235-40d9-b4a5-718097cdf02c-kube-api-access-85rxv\") pod \"0876db8f-e235-40d9-b4a5-718097cdf02c\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " 
Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.367191 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzs9k\" (UniqueName: \"kubernetes.io/projected/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-kube-api-access-mzs9k\") pod \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.367244 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-combined-ca-bundle\") pod \"0876db8f-e235-40d9-b4a5-718097cdf02c\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.367317 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data-custom\") pod \"0876db8f-e235-40d9-b4a5-718097cdf02c\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.367355 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data\") pod \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.367450 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0876db8f-e235-40d9-b4a5-718097cdf02c-logs\") pod \"0876db8f-e235-40d9-b4a5-718097cdf02c\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.367521 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-public-tls-certs\") pod \"0876db8f-e235-40d9-b4a5-718097cdf02c\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.367558 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-internal-tls-certs\") pod \"0876db8f-e235-40d9-b4a5-718097cdf02c\" (UID: \"0876db8f-e235-40d9-b4a5-718097cdf02c\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.367607 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-logs\") pod \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.367648 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-combined-ca-bundle\") pod \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\" (UID: \"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b\") " Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.368289 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqdfx\" (UniqueName: \"kubernetes.io/projected/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-kube-api-access-sqdfx\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.368314 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb7t4\" (UniqueName: \"kubernetes.io/projected/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-kube-api-access-wb7t4\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.368326 4699 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.368337 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.368350 4699 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.368361 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.368372 4699 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.376925 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-logs" (OuterVolumeSpecName: "logs") pod "b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" (UID: "b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.377422 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0876db8f-e235-40d9-b4a5-718097cdf02c-logs" (OuterVolumeSpecName: "logs") pod "0876db8f-e235-40d9-b4a5-718097cdf02c" (UID: "0876db8f-e235-40d9-b4a5-718097cdf02c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.380948 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0876db8f-e235-40d9-b4a5-718097cdf02c" (UID: "0876db8f-e235-40d9-b4a5-718097cdf02c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.382286 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0876db8f-e235-40d9-b4a5-718097cdf02c-kube-api-access-85rxv" (OuterVolumeSpecName: "kube-api-access-85rxv") pod "0876db8f-e235-40d9-b4a5-718097cdf02c" (UID: "0876db8f-e235-40d9-b4a5-718097cdf02c"). InnerVolumeSpecName "kube-api-access-85rxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.391605 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" (UID: "b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.398240 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" (UID: "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.401269 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-kube-api-access-mzs9k" (OuterVolumeSpecName: "kube-api-access-mzs9k") pod "b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" (UID: "b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b"). InnerVolumeSpecName "kube-api-access-mzs9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.410481 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81843e2c-774f-402a-bd90-c4485ab24c05" (UID: "81843e2c-774f-402a-bd90-c4485ab24c05"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.473659 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.473697 4699 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.473708 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85rxv\" (UniqueName: \"kubernetes.io/projected/0876db8f-e235-40d9-b4a5-718097cdf02c-kube-api-access-85rxv\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.473719 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzs9k\" (UniqueName: 
\"kubernetes.io/projected/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-kube-api-access-mzs9k\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.473735 4699 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.473746 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.473756 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.473766 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0876db8f-e235-40d9-b4a5-718097cdf02c-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.480727 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.495521 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d45896d49-mh862"] Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.525437 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-config" (OuterVolumeSpecName: "config") pod "81843e2c-774f-402a-bd90-c4485ab24c05" (UID: "81843e2c-774f-402a-bd90-c4485ab24c05"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.534191 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" (UID: "4a31ea66-afbf-4606-baa9-0f5fb98e5c4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.581720 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.581758 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.606720 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81843e2c-774f-402a-bd90-c4485ab24c05" (UID: "81843e2c-774f-402a-bd90-c4485ab24c05"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.613248 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" (UID: "2147a5bc-be0c-4ab4-a0ee-ede87002a1a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.618993 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data" (OuterVolumeSpecName: "config-data") pod "2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" (UID: "2147a5bc-be0c-4ab4-a0ee-ede87002a1a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.624995 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0876db8f-e235-40d9-b4a5-718097cdf02c" (UID: "0876db8f-e235-40d9-b4a5-718097cdf02c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.669613 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" event={"ID":"81843e2c-774f-402a-bd90-c4485ab24c05","Type":"ContainerDied","Data":"fed26d1422b55affaace34ac700e5a58aa1d192cab8a88f61c67c7cb3b1ca3ed"} Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.669667 4699 scope.go:117] "RemoveContainer" containerID="3eda8514ede18fd03dc0849cf95cf8d9b4cb3f130429078ff465a976e2f5421b" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.669805 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-nhzhh" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.677368 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0876db8f-e235-40d9-b4a5-718097cdf02c" (UID: "0876db8f-e235-40d9-b4a5-718097cdf02c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.685881 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81843e2c-774f-402a-bd90-c4485ab24c05-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.685907 4699 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.685920 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.685932 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.685943 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.695060 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vlzrl"] Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.699952 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c455f6f5b-f25td" event={"ID":"2147a5bc-be0c-4ab4-a0ee-ede87002a1a4","Type":"ContainerDied","Data":"1ea84a2c17c70c4722d76da041934ea3f75af2c65494a5778df946ebb8677371"} Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.700089 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c455f6f5b-f25td" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.711734 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.711899 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" (UID: "b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.712932 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78f86c6bf8-r6wpf" event={"ID":"e9ef11cc-2a83-4f0e-b117-4be10a1c0fee","Type":"ContainerDied","Data":"086804bba8040fba8ead2adc36df764be92ea222ee4962825dd9a4df869adac5"} Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.713006 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78f86c6bf8-r6wpf" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.717842 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" event={"ID":"4a31ea66-afbf-4606-baa9-0f5fb98e5c4f","Type":"ContainerDied","Data":"67791da9269463758e09bb6a9c9c2f13b834b1a262a1121df8a5fa0b5b6170cf"} Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.717980 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-config-data" (OuterVolumeSpecName: "config-data") pod "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" (UID: "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.719238 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd9f445b9-bnr2j" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.727709 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d45896d49-mh862" event={"ID":"862cb546-78f8-4864-a158-9dc217ec2796","Type":"ContainerStarted","Data":"29cc7d2eee99e53136d941d8237d18ae89ab2c4497f23c739b9b2ae06d0c1d8c"} Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.730585 4699 generic.go:334] "Generic (PLEG): container finished" podID="73fd43db-ab24-441d-9912-881ef04d4572" containerID="f1b43b05d45b05ac3c54d378fa118972d9e5848b345eada8b66bb2c67ea89c63" exitCode=0 Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.730763 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc5565bbf-zgvcg" event={"ID":"73fd43db-ab24-441d-9912-881ef04d4572","Type":"ContainerDied","Data":"f1b43b05d45b05ac3c54d378fa118972d9e5848b345eada8b66bb2c67ea89c63"} Feb 26 11:32:27 crc kubenswrapper[4699]: W0226 11:32:27.735266 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod581ae159_48c4_4821_aede_361485304c59.slice/crio-682d7ab4556bf77bd10ad1dbcd5f0a84777dbfdd65cbaf81b868443ac2be23e3 WatchSource:0}: Error finding container 682d7ab4556bf77bd10ad1dbcd5f0a84777dbfdd65cbaf81b868443ac2be23e3: Status 404 returned error can't find the container with id 682d7ab4556bf77bd10ad1dbcd5f0a84777dbfdd65cbaf81b868443ac2be23e3 Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.737259 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.737935 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84b6bf6c74-r47qt" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.738015 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b6bf6c74-r47qt" event={"ID":"0876db8f-e235-40d9-b4a5-718097cdf02c","Type":"ContainerDied","Data":"079bbabce73c111db6093e96198997a034c6927d448d649260507e6ce83573d4"} Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.751415 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data" (OuterVolumeSpecName: "config-data") pod "b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" (UID: "b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.760378 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" containerName="ceilometer-notification-agent" containerID="cri-o://2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d" gracePeriod=30 Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.760607 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.760704 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" containerName="sg-core" containerID="cri-o://5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2" gracePeriod=30 Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.760833 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8dc77f9b6-7s844" event={"ID":"b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b","Type":"ContainerDied","Data":"64d085c2e0471990e9f05ef5274018eb074bf0ab7cec6ddaf7afcafa1dae6331"} Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.767471 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data" (OuterVolumeSpecName: "config-data") pod "4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" (UID: "4a31ea66-afbf-4606-baa9-0f5fb98e5c4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.768473 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-nhzhh"] Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.776100 4699 scope.go:117] "RemoveContainer" containerID="2161a9d96d5b3712e81eaf624a88f2f6f3ee6fc2f0aaa102d1a1b03d768333c4" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.787280 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.794333 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.810049 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.810201 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.812663 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" (UID: "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.830190 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-nhzhh"] Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.843962 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0876db8f-e235-40d9-b4a5-718097cdf02c" (UID: "0876db8f-e235-40d9-b4a5-718097cdf02c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.848511 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7c455f6f5b-f25td"] Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.865361 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7c455f6f5b-f25td"] Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.871860 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data" (OuterVolumeSpecName: "config-data") pod "0876db8f-e235-40d9-b4a5-718097cdf02c" (UID: "0876db8f-e235-40d9-b4a5-718097cdf02c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.896200 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" (UID: "e9ef11cc-2a83-4f0e-b117-4be10a1c0fee"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.913176 4699 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.913207 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0876db8f-e235-40d9-b4a5-718097cdf02c-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.913216 4699 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:27 crc kubenswrapper[4699]: I0226 11:32:27.913226 4699 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.080811 4699 scope.go:117] "RemoveContainer" containerID="a0d7c518107ce530bde8dc06ecc1543caeb752ad958b36e173be8e60f8d8a088" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.131973 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-57899c756d-w9pc5" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.178216 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5fd9f445b9-bnr2j"] Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.196799 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.203368 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5fd9f445b9-bnr2j"] Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.235994 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84b6bf6c74-r47qt"] Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.249238 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-84b6bf6c74-r47qt"] Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.273803 4699 scope.go:117] "RemoveContainer" containerID="68ffeebcc8219b513faf07851f7ec0e29081e29acdefcbc0d3a8bcb52016ff06" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.287732 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" path="/var/lib/kubelet/pods/0876db8f-e235-40d9-b4a5-718097cdf02c/volumes" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.288474 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" path="/var/lib/kubelet/pods/2147a5bc-be0c-4ab4-a0ee-ede87002a1a4/volumes" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.289920 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" path="/var/lib/kubelet/pods/4a31ea66-afbf-4606-baa9-0f5fb98e5c4f/volumes" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.299077 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81843e2c-774f-402a-bd90-c4485ab24c05" path="/var/lib/kubelet/pods/81843e2c-774f-402a-bd90-c4485ab24c05/volumes" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.329796 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-config\") pod 
\"73fd43db-ab24-441d-9912-881ef04d4572\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.330545 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-internal-tls-certs\") pod \"73fd43db-ab24-441d-9912-881ef04d4572\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.330722 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g2x8\" (UniqueName: \"kubernetes.io/projected/73fd43db-ab24-441d-9912-881ef04d4572-kube-api-access-6g2x8\") pod \"73fd43db-ab24-441d-9912-881ef04d4572\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.330930 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-httpd-config\") pod \"73fd43db-ab24-441d-9912-881ef04d4572\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.331047 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-public-tls-certs\") pod \"73fd43db-ab24-441d-9912-881ef04d4572\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.331332 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-combined-ca-bundle\") pod \"73fd43db-ab24-441d-9912-881ef04d4572\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.331491 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-ovndb-tls-certs\") pod \"73fd43db-ab24-441d-9912-881ef04d4572\" (UID: \"73fd43db-ab24-441d-9912-881ef04d4572\") " Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.346712 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78f86c6bf8-r6wpf"] Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.346747 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-78f86c6bf8-r6wpf"] Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.346763 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-8dc77f9b6-7s844"] Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.346774 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-8dc77f9b6-7s844"] Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.347170 4699 scope.go:117] "RemoveContainer" containerID="1a317729338c17b4684891909a81baf465ca5e0314fc7b75c6f3742a26c946fe" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.349946 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "73fd43db-ab24-441d-9912-881ef04d4572" (UID: "73fd43db-ab24-441d-9912-881ef04d4572"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.372144 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73fd43db-ab24-441d-9912-881ef04d4572-kube-api-access-6g2x8" (OuterVolumeSpecName: "kube-api-access-6g2x8") pod "73fd43db-ab24-441d-9912-881ef04d4572" (UID: "73fd43db-ab24-441d-9912-881ef04d4572"). InnerVolumeSpecName "kube-api-access-6g2x8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.449769 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g2x8\" (UniqueName: \"kubernetes.io/projected/73fd43db-ab24-441d-9912-881ef04d4572-kube-api-access-6g2x8\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.449801 4699 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.456288 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-config" (OuterVolumeSpecName: "config") pod "73fd43db-ab24-441d-9912-881ef04d4572" (UID: "73fd43db-ab24-441d-9912-881ef04d4572"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.465793 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "73fd43db-ab24-441d-9912-881ef04d4572" (UID: "73fd43db-ab24-441d-9912-881ef04d4572"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.501270 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73fd43db-ab24-441d-9912-881ef04d4572" (UID: "73fd43db-ab24-441d-9912-881ef04d4572"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.509316 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "73fd43db-ab24-441d-9912-881ef04d4572" (UID: "73fd43db-ab24-441d-9912-881ef04d4572"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.514736 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "73fd43db-ab24-441d-9912-881ef04d4572" (UID: "73fd43db-ab24-441d-9912-881ef04d4572"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.547549 4699 scope.go:117] "RemoveContainer" containerID="cfb24c179a421c25f4518e3a61ebbebf3cbb893957f33df4ff29a903e7099944" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.551679 4699 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.551709 4699 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.551718 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.551728 
4699 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.551737 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/73fd43db-ab24-441d-9912-881ef04d4572-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.621393 4699 scope.go:117] "RemoveContainer" containerID="59e86688f7ad25464b86f65ac7156f4d78dbfbb25e41fcbc1ec58a4c8ed79739" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.658535 4699 scope.go:117] "RemoveContainer" containerID="47aa6fcab7ba63e0059bde039291f7d09fed39c47d8ed3b4b011f2b39240d68f" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.688875 4699 scope.go:117] "RemoveContainer" containerID="5fee23b2bd35e07b2bc23127d9ba51147df6b5de9840523d49b7247f51fcf676" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.744692 4699 scope.go:117] "RemoveContainer" containerID="c3cf20e496184d423dd9676570affb3ed62ff3f5e0e800069d47d590effab24c" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.797842 4699 scope.go:117] "RemoveContainer" containerID="33718273cf0b85bce01d52282cadd465a6d877e40d664fb877a0e2590e81381a" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.806649 4699 generic.go:334] "Generic (PLEG): container finished" podID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" containerID="5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2" exitCode=2 Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.806718 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cec2d73-9ca8-4a8b-836d-efce961fbde8","Type":"ContainerDied","Data":"5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2"} Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.808099 4699 generic.go:334] "Generic 
(PLEG): container finished" podID="9fa27ea0-52eb-406f-8256-68b4a471e452" containerID="c85028e72c3a0e7a111082381d8b68740bba7121da4a177d5d4e87fed9daf888" exitCode=0 Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.808169 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" event={"ID":"9fa27ea0-52eb-406f-8256-68b4a471e452","Type":"ContainerDied","Data":"c85028e72c3a0e7a111082381d8b68740bba7121da4a177d5d4e87fed9daf888"} Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.808186 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" event={"ID":"9fa27ea0-52eb-406f-8256-68b4a471e452","Type":"ContainerStarted","Data":"096662e32232c28cf3046778c91211f7c3482d79260670ba5c8b5347692e739f"} Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.834323 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81e6c561-d55c-48fa-94a9-2dd7d491fd48","Type":"ContainerStarted","Data":"48e983e75dfbee9e41159572aae0afa12ee51c7366ffabb530747e91bb647659"} Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.844362 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d45896d49-mh862" event={"ID":"862cb546-78f8-4864-a158-9dc217ec2796","Type":"ContainerStarted","Data":"9a59f11c6499b61ad8c0a8b993bd48cbdbc71b6e77f5dc55cc125d07caa3624c"} Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.845417 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.882402 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d45896d49-mh862" podStartSLOduration=10.88237759 podStartE2EDuration="10.88237759s" podCreationTimestamp="2026-02-26 11:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-26 11:32:28.874912563 +0000 UTC m=+1294.685739007" watchObservedRunningTime="2026-02-26 11:32:28.88237759 +0000 UTC m=+1294.693204024" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.884502 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc5565bbf-zgvcg" event={"ID":"73fd43db-ab24-441d-9912-881ef04d4572","Type":"ContainerDied","Data":"31b648d87b25df09b072d95e938824b9e321e65ded6b88d9eed7727a038a5155"} Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.884851 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dc5565bbf-zgvcg" Feb 26 11:32:28 crc kubenswrapper[4699]: I0226 11:32:28.925368 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"581ae159-48c4-4821-aede-361485304c59","Type":"ContainerStarted","Data":"682d7ab4556bf77bd10ad1dbcd5f0a84777dbfdd65cbaf81b868443ac2be23e3"} Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.063652 4699 scope.go:117] "RemoveContainer" containerID="514b33745b4aa127708bf8765bb8617e15516309231f6f17906729d04d3d2a16" Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.098287 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6dc5565bbf-zgvcg"] Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.105282 4699 scope.go:117] "RemoveContainer" containerID="fb5409015c0850abe735cc049f283c49118298bd94a368b4191042b9fb38469f" Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.108006 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6dc5565bbf-zgvcg"] Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.182062 4699 scope.go:117] "RemoveContainer" containerID="f1b43b05d45b05ac3c54d378fa118972d9e5848b345eada8b66bb2c67ea89c63" Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.960513 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"581ae159-48c4-4821-aede-361485304c59","Type":"ContainerStarted","Data":"9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380"} Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.961170 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"581ae159-48c4-4821-aede-361485304c59","Type":"ContainerStarted","Data":"40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8"} Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.960607 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="581ae159-48c4-4821-aede-361485304c59" containerName="cinder-api-log" containerID="cri-o://9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380" gracePeriod=30 Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.961223 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.960621 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="581ae159-48c4-4821-aede-361485304c59" containerName="cinder-api" containerID="cri-o://40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8" gracePeriod=30 Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.970957 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" event={"ID":"9fa27ea0-52eb-406f-8256-68b4a471e452","Type":"ContainerStarted","Data":"b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21"} Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.971320 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.975361 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"81e6c561-d55c-48fa-94a9-2dd7d491fd48","Type":"ContainerStarted","Data":"22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981"} Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.981083 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d45896d49-mh862" event={"ID":"862cb546-78f8-4864-a158-9dc217ec2796","Type":"ContainerStarted","Data":"97ddf93d5e2850bacd26d60c3eae5e72a0817d976e7bfe9b76f973f92ca9f570"} Feb 26 11:32:29 crc kubenswrapper[4699]: I0226 11:32:29.995233 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.995207818 podStartE2EDuration="4.995207818s" podCreationTimestamp="2026-02-26 11:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:29.979018327 +0000 UTC m=+1295.789844781" watchObservedRunningTime="2026-02-26 11:32:29.995207818 +0000 UTC m=+1295.806034272" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.004082 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" podStartSLOduration=5.004064095 podStartE2EDuration="5.004064095s" podCreationTimestamp="2026-02-26 11:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:30.002458668 +0000 UTC m=+1295.813285112" watchObservedRunningTime="2026-02-26 11:32:30.004064095 +0000 UTC m=+1295.814890529" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.274836 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73fd43db-ab24-441d-9912-881ef04d4572" path="/var/lib/kubelet/pods/73fd43db-ab24-441d-9912-881ef04d4572/volumes" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.276071 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" path="/var/lib/kubelet/pods/b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b/volumes" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.277028 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" path="/var/lib/kubelet/pods/e9ef11cc-2a83-4f0e-b117-4be10a1c0fee/volumes" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.803007 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.906987 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdwqp\" (UniqueName: \"kubernetes.io/projected/581ae159-48c4-4821-aede-361485304c59-kube-api-access-wdwqp\") pod \"581ae159-48c4-4821-aede-361485304c59\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.907024 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data\") pod \"581ae159-48c4-4821-aede-361485304c59\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.907135 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data-custom\") pod \"581ae159-48c4-4821-aede-361485304c59\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.907154 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-scripts\") pod \"581ae159-48c4-4821-aede-361485304c59\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 
11:32:30.907172 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/581ae159-48c4-4821-aede-361485304c59-etc-machine-id\") pod \"581ae159-48c4-4821-aede-361485304c59\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.907248 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/581ae159-48c4-4821-aede-361485304c59-logs\") pod \"581ae159-48c4-4821-aede-361485304c59\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.907330 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-combined-ca-bundle\") pod \"581ae159-48c4-4821-aede-361485304c59\" (UID: \"581ae159-48c4-4821-aede-361485304c59\") " Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.912784 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/581ae159-48c4-4821-aede-361485304c59-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "581ae159-48c4-4821-aede-361485304c59" (UID: "581ae159-48c4-4821-aede-361485304c59"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.913185 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-scripts" (OuterVolumeSpecName: "scripts") pod "581ae159-48c4-4821-aede-361485304c59" (UID: "581ae159-48c4-4821-aede-361485304c59"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.914283 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581ae159-48c4-4821-aede-361485304c59-kube-api-access-wdwqp" (OuterVolumeSpecName: "kube-api-access-wdwqp") pod "581ae159-48c4-4821-aede-361485304c59" (UID: "581ae159-48c4-4821-aede-361485304c59"). InnerVolumeSpecName "kube-api-access-wdwqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.914927 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/581ae159-48c4-4821-aede-361485304c59-logs" (OuterVolumeSpecName: "logs") pod "581ae159-48c4-4821-aede-361485304c59" (UID: "581ae159-48c4-4821-aede-361485304c59"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.935161 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "581ae159-48c4-4821-aede-361485304c59" (UID: "581ae159-48c4-4821-aede-361485304c59"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.963207 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "581ae159-48c4-4821-aede-361485304c59" (UID: "581ae159-48c4-4821-aede-361485304c59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.978812 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data" (OuterVolumeSpecName: "config-data") pod "581ae159-48c4-4821-aede-361485304c59" (UID: "581ae159-48c4-4821-aede-361485304c59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.993155 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81e6c561-d55c-48fa-94a9-2dd7d491fd48","Type":"ContainerStarted","Data":"85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78"} Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.996635 4699 generic.go:334] "Generic (PLEG): container finished" podID="581ae159-48c4-4821-aede-361485304c59" containerID="40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8" exitCode=0 Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.996665 4699 generic.go:334] "Generic (PLEG): container finished" podID="581ae159-48c4-4821-aede-361485304c59" containerID="9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380" exitCode=143 Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.996686 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.996719 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"581ae159-48c4-4821-aede-361485304c59","Type":"ContainerDied","Data":"40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8"} Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.996766 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"581ae159-48c4-4821-aede-361485304c59","Type":"ContainerDied","Data":"9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380"} Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.996779 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"581ae159-48c4-4821-aede-361485304c59","Type":"ContainerDied","Data":"682d7ab4556bf77bd10ad1dbcd5f0a84777dbfdd65cbaf81b868443ac2be23e3"} Feb 26 11:32:30 crc kubenswrapper[4699]: I0226 11:32:30.996801 4699 scope.go:117] "RemoveContainer" containerID="40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.009386 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.009640 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdwqp\" (UniqueName: \"kubernetes.io/projected/581ae159-48c4-4821-aede-361485304c59-kube-api-access-wdwqp\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.009706 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.009908 
4699 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.010019 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/581ae159-48c4-4821-aede-361485304c59-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.010094 4699 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/581ae159-48c4-4821-aede-361485304c59-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.010213 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/581ae159-48c4-4821-aede-361485304c59-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.094446 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.214015002 podStartE2EDuration="6.09442442s" podCreationTimestamp="2026-02-26 11:32:25 +0000 UTC" firstStartedPulling="2026-02-26 11:32:27.778289586 +0000 UTC m=+1293.589116020" lastFinishedPulling="2026-02-26 11:32:28.658699004 +0000 UTC m=+1294.469525438" observedRunningTime="2026-02-26 11:32:31.01728008 +0000 UTC m=+1296.828106534" watchObservedRunningTime="2026-02-26 11:32:31.09442442 +0000 UTC m=+1296.905250864" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.095370 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.107533 4699 scope.go:117] "RemoveContainer" containerID="9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.121023 4699 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.150917 4699 scope.go:117] "RemoveContainer" containerID="40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.152442 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8\": container with ID starting with 40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8 not found: ID does not exist" containerID="40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.152565 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8"} err="failed to get container status \"40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8\": rpc error: code = NotFound desc = could not find container \"40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8\": container with ID starting with 40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8 not found: ID does not exist" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.152672 4699 scope.go:117] "RemoveContainer" containerID="9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.154580 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155225 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fd43db-ab24-441d-9912-881ef04d4572" containerName="neutron-api" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155249 4699 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="73fd43db-ab24-441d-9912-881ef04d4572" containerName="neutron-api" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155306 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81843e2c-774f-402a-bd90-c4485ab24c05" containerName="init" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155318 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="81843e2c-774f-402a-bd90-c4485ab24c05" containerName="init" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155329 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" containerName="barbican-worker" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155339 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" containerName="barbican-worker" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155380 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerName="placement-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155392 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerName="placement-log" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155406 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581ae159-48c4-4821-aede-361485304c59" containerName="cinder-api-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155416 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="581ae159-48c4-4821-aede-361485304c59" containerName="cinder-api-log" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155428 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerName="barbican-api-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155436 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" 
containerName="barbican-api-log" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155487 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" containerName="barbican-keystone-listener-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155498 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" containerName="barbican-keystone-listener-log" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155518 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerName="barbican-api" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155552 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerName="barbican-api" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155573 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155582 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api-log" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155595 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerName="placement-api" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155602 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerName="placement-api" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155643 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" containerName="barbican-worker-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155655 4699 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" containerName="barbican-worker-log" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155672 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" containerName="barbican-keystone-listener" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155680 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" containerName="barbican-keystone-listener" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155912 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81843e2c-774f-402a-bd90-c4485ab24c05" containerName="dnsmasq-dns" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155924 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="81843e2c-774f-402a-bd90-c4485ab24c05" containerName="dnsmasq-dns" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155942 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fd43db-ab24-441d-9912-881ef04d4572" containerName="neutron-httpd" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.155952 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fd43db-ab24-441d-9912-881ef04d4572" containerName="neutron-httpd" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.155992 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581ae159-48c4-4821-aede-361485304c59" containerName="cinder-api" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156002 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="581ae159-48c4-4821-aede-361485304c59" containerName="cinder-api" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.156014 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156022 4699 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156353 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156402 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerName="barbican-api-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156418 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="73fd43db-ab24-441d-9912-881ef04d4572" containerName="neutron-httpd" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156429 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" containerName="barbican-worker" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156443 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="73fd43db-ab24-441d-9912-881ef04d4572" containerName="neutron-api" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156567 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="2147a5bc-be0c-4ab4-a0ee-ede87002a1a4" containerName="barbican-api-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156583 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" containerName="barbican-keystone-listener" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156597 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a31ea66-afbf-4606-baa9-0f5fb98e5c4f" containerName="barbican-worker-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156611 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerName="placement-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156739 4699 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="81843e2c-774f-402a-bd90-c4485ab24c05" containerName="dnsmasq-dns" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156752 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="581ae159-48c4-4821-aede-361485304c59" containerName="cinder-api-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156766 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="581ae159-48c4-4821-aede-361485304c59" containerName="cinder-api" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156904 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ef11cc-2a83-4f0e-b117-4be10a1c0fee" containerName="placement-api" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156920 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e2bd7a-1476-4dd9-8765-0c1bbbd7dd8b" containerName="barbican-keystone-listener-log" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.156936 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="0876db8f-e235-40d9-b4a5-718097cdf02c" containerName="barbican-api" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.159792 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: E0226 11:32:31.160671 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380\": container with ID starting with 9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380 not found: ID does not exist" containerID="9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.160708 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380"} err="failed to get container status \"9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380\": rpc error: code = NotFound desc = could not find container \"9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380\": container with ID starting with 9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380 not found: ID does not exist" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.160734 4699 scope.go:117] "RemoveContainer" containerID="40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.161141 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8"} err="failed to get container status \"40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8\": rpc error: code = NotFound desc = could not find container \"40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8\": container with ID starting with 40142f357c043763beb1bb0a8f6b18fb7ff06e2c964dea2c67cd27ad4975a2b8 not found: ID does not exist" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.161159 4699 scope.go:117] "RemoveContainer" 
containerID="9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.161484 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380"} err="failed to get container status \"9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380\": rpc error: code = NotFound desc = could not find container \"9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380\": container with ID starting with 9e86abf10ae647fbec08f92496385017210d4ff2c918b5e4390bc7246666f380 not found: ID does not exist" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.162092 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.163450 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.167217 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.184782 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.320319 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-config-data\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.320384 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2c2d2c1-e68e-4b14-a732-3b42a6132503-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.320425 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.320465 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.320487 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.320527 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c2d2c1-e68e-4b14-a732-3b42a6132503-logs\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.320593 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5f6r\" (UniqueName: \"kubernetes.io/projected/c2c2d2c1-e68e-4b14-a732-3b42a6132503-kube-api-access-l5f6r\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: 
I0226 11:32:31.320656 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-scripts\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.320688 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-config-data-custom\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.422726 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2c2d2c1-e68e-4b14-a732-3b42a6132503-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.423064 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.423260 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.423363 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.423504 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c2d2c1-e68e-4b14-a732-3b42a6132503-logs\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.422866 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2c2d2c1-e68e-4b14-a732-3b42a6132503-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.424234 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5f6r\" (UniqueName: \"kubernetes.io/projected/c2c2d2c1-e68e-4b14-a732-3b42a6132503-kube-api-access-l5f6r\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.424551 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-scripts\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.424611 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-config-data-custom\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 
11:32:31.424646 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-config-data\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.425016 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c2d2c1-e68e-4b14-a732-3b42a6132503-logs\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.427226 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.429529 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-config-data-custom\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.429772 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.430682 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-scripts\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " 
pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.430843 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.440810 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c2d2c1-e68e-4b14-a732-3b42a6132503-config-data\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.445832 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5f6r\" (UniqueName: \"kubernetes.io/projected/c2c2d2c1-e68e-4b14-a732-3b42a6132503-kube-api-access-l5f6r\") pod \"cinder-api-0\" (UID: \"c2c2d2c1-e68e-4b14-a732-3b42a6132503\") " pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.487831 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:32:31 crc kubenswrapper[4699]: I0226 11:32:31.934587 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:31 crc kubenswrapper[4699]: W0226 11:32:31.939690 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2c2d2c1_e68e_4b14_a732_3b42a6132503.slice/crio-e474cf4adc9bb11aed16ebe0fc2a10f66d43da75ae3ad333d6e6c436ad80c6fe WatchSource:0}: Error finding container e474cf4adc9bb11aed16ebe0fc2a10f66d43da75ae3ad333d6e6c436ad80c6fe: Status 404 returned error can't find the container with id e474cf4adc9bb11aed16ebe0fc2a10f66d43da75ae3ad333d6e6c436ad80c6fe Feb 26 11:32:32 crc kubenswrapper[4699]: I0226 11:32:32.062784 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c2c2d2c1-e68e-4b14-a732-3b42a6132503","Type":"ContainerStarted","Data":"e474cf4adc9bb11aed16ebe0fc2a10f66d43da75ae3ad333d6e6c436ad80c6fe"} Feb 26 11:32:32 crc kubenswrapper[4699]: I0226 11:32:32.275391 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="581ae159-48c4-4821-aede-361485304c59" path="/var/lib/kubelet/pods/581ae159-48c4-4821-aede-361485304c59/volumes" Feb 26 11:32:32 crc kubenswrapper[4699]: I0226 11:32:32.454575 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-67d4f89fb9-65kmq" Feb 26 11:32:33 crc kubenswrapper[4699]: I0226 11:32:33.107611 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c2c2d2c1-e68e-4b14-a732-3b42a6132503","Type":"ContainerStarted","Data":"0a44045c0ef3fbc374b93d9133001d77112bcc42335dae3b11707d390ea07179"} Feb 26 11:32:33 crc kubenswrapper[4699]: I0226 11:32:33.965489 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.094576 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-config-data\") pod \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.094683 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srl4m\" (UniqueName: \"kubernetes.io/projected/7cec2d73-9ca8-4a8b-836d-efce961fbde8-kube-api-access-srl4m\") pod \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.094762 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-run-httpd\") pod \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.094794 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-combined-ca-bundle\") pod \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.094944 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-sg-core-conf-yaml\") pod \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.094993 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-log-httpd\") pod \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.095014 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-scripts\") pod \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\" (UID: \"7cec2d73-9ca8-4a8b-836d-efce961fbde8\") " Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.095079 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7cec2d73-9ca8-4a8b-836d-efce961fbde8" (UID: "7cec2d73-9ca8-4a8b-836d-efce961fbde8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.095608 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7cec2d73-9ca8-4a8b-836d-efce961fbde8" (UID: "7cec2d73-9ca8-4a8b-836d-efce961fbde8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.095673 4699 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.102334 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-scripts" (OuterVolumeSpecName: "scripts") pod "7cec2d73-9ca8-4a8b-836d-efce961fbde8" (UID: "7cec2d73-9ca8-4a8b-836d-efce961fbde8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.106357 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cec2d73-9ca8-4a8b-836d-efce961fbde8-kube-api-access-srl4m" (OuterVolumeSpecName: "kube-api-access-srl4m") pod "7cec2d73-9ca8-4a8b-836d-efce961fbde8" (UID: "7cec2d73-9ca8-4a8b-836d-efce961fbde8"). InnerVolumeSpecName "kube-api-access-srl4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.129779 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c2c2d2c1-e68e-4b14-a732-3b42a6132503","Type":"ContainerStarted","Data":"dfce0dfae871016f2f4e74df9ef312cfcba1295385069eef7f6970c9983c1ca9"} Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.130010 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.132913 4699 generic.go:334] "Generic (PLEG): container finished" podID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" containerID="2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d" exitCode=0 Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.132953 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cec2d73-9ca8-4a8b-836d-efce961fbde8","Type":"ContainerDied","Data":"2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d"} Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.132980 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cec2d73-9ca8-4a8b-836d-efce961fbde8","Type":"ContainerDied","Data":"50c24ca371e65d6a43a9a97ed072f4bd1eadffc6515aa3e571658b4eeec32c3b"} Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.132985 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.132999 4699 scope.go:117] "RemoveContainer" containerID="5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.139796 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-config-data" (OuterVolumeSpecName: "config-data") pod "7cec2d73-9ca8-4a8b-836d-efce961fbde8" (UID: "7cec2d73-9ca8-4a8b-836d-efce961fbde8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.143060 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7cec2d73-9ca8-4a8b-836d-efce961fbde8" (UID: "7cec2d73-9ca8-4a8b-836d-efce961fbde8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.164767 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.164745175 podStartE2EDuration="3.164745175s" podCreationTimestamp="2026-02-26 11:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:34.156954189 +0000 UTC m=+1299.967780643" watchObservedRunningTime="2026-02-26 11:32:34.164745175 +0000 UTC m=+1299.975571619" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.170160 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cec2d73-9ca8-4a8b-836d-efce961fbde8" (UID: "7cec2d73-9ca8-4a8b-836d-efce961fbde8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.196942 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.196983 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srl4m\" (UniqueName: \"kubernetes.io/projected/7cec2d73-9ca8-4a8b-836d-efce961fbde8-kube-api-access-srl4m\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.196995 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.197004 4699 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.197012 4699 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cec2d73-9ca8-4a8b-836d-efce961fbde8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.197020 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cec2d73-9ca8-4a8b-836d-efce961fbde8-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.249513 4699 scope.go:117] "RemoveContainer" containerID="2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.289510 4699 scope.go:117] "RemoveContainer" 
containerID="5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2" Feb 26 11:32:34 crc kubenswrapper[4699]: E0226 11:32:34.291326 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2\": container with ID starting with 5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2 not found: ID does not exist" containerID="5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.291364 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2"} err="failed to get container status \"5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2\": rpc error: code = NotFound desc = could not find container \"5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2\": container with ID starting with 5b67c1b09ab95a7bde953070b4025e70711be4e1fc09bbc21308bfb6c7bb94c2 not found: ID does not exist" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.291385 4699 scope.go:117] "RemoveContainer" containerID="2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d" Feb 26 11:32:34 crc kubenswrapper[4699]: E0226 11:32:34.291630 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d\": container with ID starting with 2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d not found: ID does not exist" containerID="2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.291672 4699 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d"} err="failed to get container status \"2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d\": rpc error: code = NotFound desc = could not find container \"2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d\": container with ID starting with 2f8839485194d3f95fa8291b794501d86f5716d9498983ff8ecd5c989e725c0d not found: ID does not exist" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.483552 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.493150 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.515161 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:34 crc kubenswrapper[4699]: E0226 11:32:34.515620 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" containerName="sg-core" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.515643 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" containerName="sg-core" Feb 26 11:32:34 crc kubenswrapper[4699]: E0226 11:32:34.515688 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" containerName="ceilometer-notification-agent" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.515697 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" containerName="ceilometer-notification-agent" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.515915 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" containerName="sg-core" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.515940 4699 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" containerName="ceilometer-notification-agent" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.521872 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.525841 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.526174 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.544265 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.708885 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-run-httpd\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.708947 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.709008 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-config-data\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.709336 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-log-httpd\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.709428 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-scripts\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.709586 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkp48\" (UniqueName: \"kubernetes.io/projected/07f96d49-8858-4aca-b9c2-3cf489845764-kube-api-access-rkp48\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.709657 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.811132 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkp48\" (UniqueName: \"kubernetes.io/projected/07f96d49-8858-4aca-b9c2-3cf489845764-kube-api-access-rkp48\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.811181 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.811231 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-run-httpd\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.811260 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.811297 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-config-data\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.811396 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-log-httpd\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.811418 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-scripts\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.813403 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-run-httpd\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.813485 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-log-httpd\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.819304 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-scripts\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.821753 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.830133 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-config-data\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.836047 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 
11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.836997 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkp48\" (UniqueName: \"kubernetes.io/projected/07f96d49-8858-4aca-b9c2-3cf489845764-kube-api-access-rkp48\") pod \"ceilometer-0\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4699]: I0226 11:32:34.854500 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:35 crc kubenswrapper[4699]: I0226 11:32:35.360507 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:35 crc kubenswrapper[4699]: W0226 11:32:35.361340 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07f96d49_8858_4aca_b9c2_3cf489845764.slice/crio-47100af0619c1b65c26d0c6fde8e00cd8b96fe31ad0063ee87c9c6c36917918c WatchSource:0}: Error finding container 47100af0619c1b65c26d0c6fde8e00cd8b96fe31ad0063ee87c9c6c36917918c: Status 404 returned error can't find the container with id 47100af0619c1b65c26d0c6fde8e00cd8b96fe31ad0063ee87c9c6c36917918c Feb 26 11:32:35 crc kubenswrapper[4699]: I0226 11:32:35.440156 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 11:32:35 crc kubenswrapper[4699]: I0226 11:32:35.538259 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:32:35 crc kubenswrapper[4699]: I0226 11:32:35.618923 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gg27w"] Feb 26 11:32:35 crc kubenswrapper[4699]: I0226 11:32:35.623431 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" podUID="21ee9717-aaae-4511-9cee-fb022818e57d" containerName="dnsmasq-dns" 
containerID="cri-o://306402b7645a267592b660f978f8685767bc49fa883947fdba6ed6fa1d54d19c" gracePeriod=10 Feb 26 11:32:35 crc kubenswrapper[4699]: I0226 11:32:35.719613 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.209448 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f96d49-8858-4aca-b9c2-3cf489845764","Type":"ContainerStarted","Data":"47100af0619c1b65c26d0c6fde8e00cd8b96fe31ad0063ee87c9c6c36917918c"} Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.215030 4699 generic.go:334] "Generic (PLEG): container finished" podID="21ee9717-aaae-4511-9cee-fb022818e57d" containerID="306402b7645a267592b660f978f8685767bc49fa883947fdba6ed6fa1d54d19c" exitCode=0 Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.217052 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" event={"ID":"21ee9717-aaae-4511-9cee-fb022818e57d","Type":"ContainerDied","Data":"306402b7645a267592b660f978f8685767bc49fa883947fdba6ed6fa1d54d19c"} Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.279684 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cec2d73-9ca8-4a8b-836d-efce961fbde8" path="/var/lib/kubelet/pods/7cec2d73-9ca8-4a8b-836d-efce961fbde8/volumes" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.310749 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.313400 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.487377 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-sb\") pod \"21ee9717-aaae-4511-9cee-fb022818e57d\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.487880 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-svc\") pod \"21ee9717-aaae-4511-9cee-fb022818e57d\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.487944 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-config\") pod \"21ee9717-aaae-4511-9cee-fb022818e57d\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.487997 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-nb\") pod \"21ee9717-aaae-4511-9cee-fb022818e57d\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.488106 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlcgz\" (UniqueName: \"kubernetes.io/projected/21ee9717-aaae-4511-9cee-fb022818e57d-kube-api-access-mlcgz\") pod \"21ee9717-aaae-4511-9cee-fb022818e57d\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.488169 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-swift-storage-0\") pod \"21ee9717-aaae-4511-9cee-fb022818e57d\" (UID: \"21ee9717-aaae-4511-9cee-fb022818e57d\") " Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.507179 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ee9717-aaae-4511-9cee-fb022818e57d-kube-api-access-mlcgz" (OuterVolumeSpecName: "kube-api-access-mlcgz") pod "21ee9717-aaae-4511-9cee-fb022818e57d" (UID: "21ee9717-aaae-4511-9cee-fb022818e57d"). InnerVolumeSpecName "kube-api-access-mlcgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.548945 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "21ee9717-aaae-4511-9cee-fb022818e57d" (UID: "21ee9717-aaae-4511-9cee-fb022818e57d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.549166 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21ee9717-aaae-4511-9cee-fb022818e57d" (UID: "21ee9717-aaae-4511-9cee-fb022818e57d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.559752 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "21ee9717-aaae-4511-9cee-fb022818e57d" (UID: "21ee9717-aaae-4511-9cee-fb022818e57d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.562801 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "21ee9717-aaae-4511-9cee-fb022818e57d" (UID: "21ee9717-aaae-4511-9cee-fb022818e57d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.577611 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-config" (OuterVolumeSpecName: "config") pod "21ee9717-aaae-4511-9cee-fb022818e57d" (UID: "21ee9717-aaae-4511-9cee-fb022818e57d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.590456 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlcgz\" (UniqueName: \"kubernetes.io/projected/21ee9717-aaae-4511-9cee-fb022818e57d-kube-api-access-mlcgz\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.591031 4699 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.591052 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.591061 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-dns-svc\") on node \"crc\" DevicePath \"\"" 
Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.591071 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.591079 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21ee9717-aaae-4511-9cee-fb022818e57d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.809960 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 26 11:32:36 crc kubenswrapper[4699]: E0226 11:32:36.810573 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ee9717-aaae-4511-9cee-fb022818e57d" containerName="dnsmasq-dns" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.810594 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ee9717-aaae-4511-9cee-fb022818e57d" containerName="dnsmasq-dns" Feb 26 11:32:36 crc kubenswrapper[4699]: E0226 11:32:36.810622 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ee9717-aaae-4511-9cee-fb022818e57d" containerName="init" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.810631 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ee9717-aaae-4511-9cee-fb022818e57d" containerName="init" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.810825 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ee9717-aaae-4511-9cee-fb022818e57d" containerName="dnsmasq-dns" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.811401 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.815453 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xh5tl" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.815457 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.816255 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.820046 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.998942 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.999068 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-openstack-config-secret\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.999366 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-openstack-config\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:36 crc kubenswrapper[4699]: I0226 11:32:36.999673 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxkpt\" (UniqueName: \"kubernetes.io/projected/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-kube-api-access-nxkpt\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.101444 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.101586 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-openstack-config-secret\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.101611 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-openstack-config\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.101727 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxkpt\" (UniqueName: \"kubernetes.io/projected/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-kube-api-access-nxkpt\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.103586 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-openstack-config\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.105938 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-openstack-config-secret\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.106057 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.120890 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxkpt\" (UniqueName: \"kubernetes.io/projected/16db7cc3-bd7c-44aa-b92f-d2a645d96ef0-kube-api-access-nxkpt\") pod \"openstackclient\" (UID: \"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0\") " pod="openstack/openstackclient" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.126980 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.242461 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" event={"ID":"21ee9717-aaae-4511-9cee-fb022818e57d","Type":"ContainerDied","Data":"729fa2fe733b6553118627e4796e6e00ed271782aa89de919499cdfc619cd740"} Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.242515 4699 scope.go:117] "RemoveContainer" containerID="306402b7645a267592b660f978f8685767bc49fa883947fdba6ed6fa1d54d19c" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.242622 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-gg27w" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.251021 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f96d49-8858-4aca-b9c2-3cf489845764","Type":"ContainerStarted","Data":"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3"} Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.251081 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="81e6c561-d55c-48fa-94a9-2dd7d491fd48" containerName="cinder-scheduler" containerID="cri-o://22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981" gracePeriod=30 Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.251110 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="81e6c561-d55c-48fa-94a9-2dd7d491fd48" containerName="probe" containerID="cri-o://85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78" gracePeriod=30 Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.290486 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-78cbc76b59-m6shv"] Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.290709 4699 scope.go:117] "RemoveContainer" 
containerID="92cf2b1cba562648cb5236aef5b4582d6ded613391d9217a2ee3e5335a2f73cf" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.292062 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.301684 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.301787 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.301946 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.310315 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-78cbc76b59-m6shv"] Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.340312 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gg27w"] Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.372975 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gg27w"] Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.408300 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5a4ece68-df2a-480c-9531-1d133d7f4bd0-etc-swift\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.408345 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a4ece68-df2a-480c-9531-1d133d7f4bd0-run-httpd\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: 
\"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.408378 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-combined-ca-bundle\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.408422 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-public-tls-certs\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.408446 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-config-data\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.408488 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtf8m\" (UniqueName: \"kubernetes.io/projected/5a4ece68-df2a-480c-9531-1d133d7f4bd0-kube-api-access-qtf8m\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.408513 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a4ece68-df2a-480c-9531-1d133d7f4bd0-log-httpd\") 
pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.408532 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-internal-tls-certs\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.510799 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5a4ece68-df2a-480c-9531-1d133d7f4bd0-etc-swift\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.510928 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a4ece68-df2a-480c-9531-1d133d7f4bd0-run-httpd\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.511495 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a4ece68-df2a-480c-9531-1d133d7f4bd0-run-httpd\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.511571 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-combined-ca-bundle\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: 
\"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.512094 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-public-tls-certs\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.512153 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-config-data\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.512212 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtf8m\" (UniqueName: \"kubernetes.io/projected/5a4ece68-df2a-480c-9531-1d133d7f4bd0-kube-api-access-qtf8m\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.512250 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a4ece68-df2a-480c-9531-1d133d7f4bd0-log-httpd\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.512279 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-internal-tls-certs\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " 
pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.513856 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a4ece68-df2a-480c-9531-1d133d7f4bd0-log-httpd\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.516860 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-public-tls-certs\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.517043 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-internal-tls-certs\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.517582 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-config-data\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.517759 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5a4ece68-df2a-480c-9531-1d133d7f4bd0-etc-swift\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.524211 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4ece68-df2a-480c-9531-1d133d7f4bd0-combined-ca-bundle\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.533808 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtf8m\" (UniqueName: \"kubernetes.io/projected/5a4ece68-df2a-480c-9531-1d133d7f4bd0-kube-api-access-qtf8m\") pod \"swift-proxy-78cbc76b59-m6shv\" (UID: \"5a4ece68-df2a-480c-9531-1d133d7f4bd0\") " pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.650822 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 11:32:37 crc kubenswrapper[4699]: W0226 11:32:37.654788 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16db7cc3_bd7c_44aa_b92f_d2a645d96ef0.slice/crio-2ee3b4fc6642667decf6be1b2b1d77f395d542f2813c79d3c87ab1a802f09f49 WatchSource:0}: Error finding container 2ee3b4fc6642667decf6be1b2b1d77f395d542f2813c79d3c87ab1a802f09f49: Status 404 returned error can't find the container with id 2ee3b4fc6642667decf6be1b2b1d77f395d542f2813c79d3c87ab1a802f09f49 Feb 26 11:32:37 crc kubenswrapper[4699]: I0226 11:32:37.667367 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:38 crc kubenswrapper[4699]: I0226 11:32:38.131470 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-57899c756d-w9pc5" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:32:38 crc kubenswrapper[4699]: I0226 11:32:38.289162 4699 generic.go:334] "Generic (PLEG): container finished" podID="81e6c561-d55c-48fa-94a9-2dd7d491fd48" containerID="85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78" exitCode=0 Feb 26 11:32:38 crc kubenswrapper[4699]: W0226 11:32:38.303716 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a4ece68_df2a_480c_9531_1d133d7f4bd0.slice/crio-dbf6e98d742fcea8eca1e3480f96f547bbcf81fb4012078188fbc090746daeab WatchSource:0}: Error finding container dbf6e98d742fcea8eca1e3480f96f547bbcf81fb4012078188fbc090746daeab: Status 404 returned error can't find the container with id dbf6e98d742fcea8eca1e3480f96f547bbcf81fb4012078188fbc090746daeab Feb 26 11:32:38 crc kubenswrapper[4699]: I0226 11:32:38.308199 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ee9717-aaae-4511-9cee-fb022818e57d" path="/var/lib/kubelet/pods/21ee9717-aaae-4511-9cee-fb022818e57d/volumes" Feb 26 11:32:38 crc kubenswrapper[4699]: I0226 11:32:38.308794 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81e6c561-d55c-48fa-94a9-2dd7d491fd48","Type":"ContainerDied","Data":"85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78"} Feb 26 11:32:38 crc kubenswrapper[4699]: I0226 11:32:38.308829 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0","Type":"ContainerStarted","Data":"2ee3b4fc6642667decf6be1b2b1d77f395d542f2813c79d3c87ab1a802f09f49"} Feb 26 11:32:38 crc kubenswrapper[4699]: I0226 11:32:38.308844 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-78cbc76b59-m6shv"] Feb 26 11:32:38 crc kubenswrapper[4699]: I0226 11:32:38.308859 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f96d49-8858-4aca-b9c2-3cf489845764","Type":"ContainerStarted","Data":"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9"} Feb 26 11:32:38 crc kubenswrapper[4699]: I0226 11:32:38.308869 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f96d49-8858-4aca-b9c2-3cf489845764","Type":"ContainerStarted","Data":"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c"} Feb 26 11:32:39 crc kubenswrapper[4699]: I0226 11:32:39.320871 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78cbc76b59-m6shv" event={"ID":"5a4ece68-df2a-480c-9531-1d133d7f4bd0","Type":"ContainerStarted","Data":"9d313c26fcbd3d6642064b0ae4b90d726851d1cb87e4a49ead108da7f89fa77e"} Feb 26 11:32:39 crc kubenswrapper[4699]: I0226 11:32:39.321284 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78cbc76b59-m6shv" event={"ID":"5a4ece68-df2a-480c-9531-1d133d7f4bd0","Type":"ContainerStarted","Data":"acd5aa6b8be65943874c4007750d0de6cfc1464d6616e24207131683c54b76b0"} Feb 26 11:32:39 crc kubenswrapper[4699]: I0226 11:32:39.321304 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78cbc76b59-m6shv" event={"ID":"5a4ece68-df2a-480c-9531-1d133d7f4bd0","Type":"ContainerStarted","Data":"dbf6e98d742fcea8eca1e3480f96f547bbcf81fb4012078188fbc090746daeab"} Feb 26 11:32:39 crc kubenswrapper[4699]: I0226 11:32:39.321324 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:39 crc kubenswrapper[4699]: I0226 11:32:39.321339 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:39 crc kubenswrapper[4699]: I0226 11:32:39.352656 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-78cbc76b59-m6shv" podStartSLOduration=2.352640116 podStartE2EDuration="2.352640116s" podCreationTimestamp="2026-02-26 11:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:39.35209305 +0000 UTC m=+1305.162919524" watchObservedRunningTime="2026-02-26 11:32:39.352640116 +0000 UTC m=+1305.163466560" Feb 26 11:32:40 crc kubenswrapper[4699]: I0226 11:32:40.855063 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.338434 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f96d49-8858-4aca-b9c2-3cf489845764","Type":"ContainerStarted","Data":"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351"} Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.338603 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="ceilometer-central-agent" containerID="cri-o://e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3" gracePeriod=30 Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.338804 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="proxy-httpd" containerID="cri-o://c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351" gracePeriod=30 Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.338818 
4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="sg-core" containerID="cri-o://02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9" gracePeriod=30 Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.338828 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="ceilometer-notification-agent" containerID="cri-o://9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c" gracePeriod=30 Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.339014 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.373926 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.881027367 podStartE2EDuration="7.373910446s" podCreationTimestamp="2026-02-26 11:32:34 +0000 UTC" firstStartedPulling="2026-02-26 11:32:35.369286336 +0000 UTC m=+1301.180112770" lastFinishedPulling="2026-02-26 11:32:40.862169415 +0000 UTC m=+1306.672995849" observedRunningTime="2026-02-26 11:32:41.3726405 +0000 UTC m=+1307.183466934" watchObservedRunningTime="2026-02-26 11:32:41.373910446 +0000 UTC m=+1307.184736870" Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.796760 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.900842 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81e6c561-d55c-48fa-94a9-2dd7d491fd48-etc-machine-id\") pod \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.900973 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-scripts\") pod \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.900972 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81e6c561-d55c-48fa-94a9-2dd7d491fd48-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "81e6c561-d55c-48fa-94a9-2dd7d491fd48" (UID: "81e6c561-d55c-48fa-94a9-2dd7d491fd48"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.901087 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9rw5\" (UniqueName: \"kubernetes.io/projected/81e6c561-d55c-48fa-94a9-2dd7d491fd48-kube-api-access-j9rw5\") pod \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.901165 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data\") pod \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.901231 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data-custom\") pod \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.901263 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-combined-ca-bundle\") pod \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\" (UID: \"81e6c561-d55c-48fa-94a9-2dd7d491fd48\") " Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.901615 4699 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81e6c561-d55c-48fa-94a9-2dd7d491fd48-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.906539 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-scripts" (OuterVolumeSpecName: "scripts") pod 
"81e6c561-d55c-48fa-94a9-2dd7d491fd48" (UID: "81e6c561-d55c-48fa-94a9-2dd7d491fd48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.909300 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "81e6c561-d55c-48fa-94a9-2dd7d491fd48" (UID: "81e6c561-d55c-48fa-94a9-2dd7d491fd48"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.922142 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e6c561-d55c-48fa-94a9-2dd7d491fd48-kube-api-access-j9rw5" (OuterVolumeSpecName: "kube-api-access-j9rw5") pod "81e6c561-d55c-48fa-94a9-2dd7d491fd48" (UID: "81e6c561-d55c-48fa-94a9-2dd7d491fd48"). InnerVolumeSpecName "kube-api-access-j9rw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:41 crc kubenswrapper[4699]: I0226 11:32:41.974291 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81e6c561-d55c-48fa-94a9-2dd7d491fd48" (UID: "81e6c561-d55c-48fa-94a9-2dd7d491fd48"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.003045 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.003067 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9rw5\" (UniqueName: \"kubernetes.io/projected/81e6c561-d55c-48fa-94a9-2dd7d491fd48-kube-api-access-j9rw5\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.003076 4699 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.003132 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.041201 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data" (OuterVolumeSpecName: "config-data") pod "81e6c561-d55c-48fa-94a9-2dd7d491fd48" (UID: "81e6c561-d55c-48fa-94a9-2dd7d491fd48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.104736 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e6c561-d55c-48fa-94a9-2dd7d491fd48-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.124898 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.313734 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "07f96d49-8858-4aca-b9c2-3cf489845764" (UID: "07f96d49-8858-4aca-b9c2-3cf489845764"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.310846 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-log-httpd\") pod \"07f96d49-8858-4aca-b9c2-3cf489845764\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.317789 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-run-httpd\") pod \"07f96d49-8858-4aca-b9c2-3cf489845764\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.318060 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkp48\" (UniqueName: \"kubernetes.io/projected/07f96d49-8858-4aca-b9c2-3cf489845764-kube-api-access-rkp48\") pod \"07f96d49-8858-4aca-b9c2-3cf489845764\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.318302 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-sg-core-conf-yaml\") pod \"07f96d49-8858-4aca-b9c2-3cf489845764\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.318637 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-combined-ca-bundle\") pod \"07f96d49-8858-4aca-b9c2-3cf489845764\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.318766 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-scripts\") pod \"07f96d49-8858-4aca-b9c2-3cf489845764\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.321469 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-config-data\") pod \"07f96d49-8858-4aca-b9c2-3cf489845764\" (UID: \"07f96d49-8858-4aca-b9c2-3cf489845764\") " Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.322256 4699 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.317966 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "07f96d49-8858-4aca-b9c2-3cf489845764" (UID: "07f96d49-8858-4aca-b9c2-3cf489845764"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.332343 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-scripts" (OuterVolumeSpecName: "scripts") pod "07f96d49-8858-4aca-b9c2-3cf489845764" (UID: "07f96d49-8858-4aca-b9c2-3cf489845764"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.332453 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f96d49-8858-4aca-b9c2-3cf489845764-kube-api-access-rkp48" (OuterVolumeSpecName: "kube-api-access-rkp48") pod "07f96d49-8858-4aca-b9c2-3cf489845764" (UID: "07f96d49-8858-4aca-b9c2-3cf489845764"). InnerVolumeSpecName "kube-api-access-rkp48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.353228 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "07f96d49-8858-4aca-b9c2-3cf489845764" (UID: "07f96d49-8858-4aca-b9c2-3cf489845764"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.359223 4699 generic.go:334] "Generic (PLEG): container finished" podID="07f96d49-8858-4aca-b9c2-3cf489845764" containerID="c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351" exitCode=0 Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.359284 4699 generic.go:334] "Generic (PLEG): container finished" podID="07f96d49-8858-4aca-b9c2-3cf489845764" containerID="02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9" exitCode=2 Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.359294 4699 generic.go:334] "Generic (PLEG): container finished" podID="07f96d49-8858-4aca-b9c2-3cf489845764" containerID="9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c" exitCode=0 Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.359303 4699 generic.go:334] "Generic (PLEG): container finished" podID="07f96d49-8858-4aca-b9c2-3cf489845764" containerID="e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3" exitCode=0 Feb 26 
11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.359407 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f96d49-8858-4aca-b9c2-3cf489845764","Type":"ContainerDied","Data":"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351"} Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.359444 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f96d49-8858-4aca-b9c2-3cf489845764","Type":"ContainerDied","Data":"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9"} Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.359460 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f96d49-8858-4aca-b9c2-3cf489845764","Type":"ContainerDied","Data":"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c"} Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.359473 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f96d49-8858-4aca-b9c2-3cf489845764","Type":"ContainerDied","Data":"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3"} Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.359483 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f96d49-8858-4aca-b9c2-3cf489845764","Type":"ContainerDied","Data":"47100af0619c1b65c26d0c6fde8e00cd8b96fe31ad0063ee87c9c6c36917918c"} Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.359502 4699 scope.go:117] "RemoveContainer" containerID="c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.359693 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.366137 4699 generic.go:334] "Generic (PLEG): container finished" podID="81e6c561-d55c-48fa-94a9-2dd7d491fd48" containerID="22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981" exitCode=0 Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.369250 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.372209 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81e6c561-d55c-48fa-94a9-2dd7d491fd48","Type":"ContainerDied","Data":"22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981"} Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.372269 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81e6c561-d55c-48fa-94a9-2dd7d491fd48","Type":"ContainerDied","Data":"48e983e75dfbee9e41159572aae0afa12ee51c7366ffabb530747e91bb647659"} Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.388013 4699 scope.go:117] "RemoveContainer" containerID="02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.416279 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.422363 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07f96d49-8858-4aca-b9c2-3cf489845764" (UID: "07f96d49-8858-4aca-b9c2-3cf489845764"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.433291 4699 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f96d49-8858-4aca-b9c2-3cf489845764-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.433325 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkp48\" (UniqueName: \"kubernetes.io/projected/07f96d49-8858-4aca-b9c2-3cf489845764-kube-api-access-rkp48\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.433336 4699 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.433345 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.433353 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.437761 4699 scope.go:117] "RemoveContainer" containerID="9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.438843 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-config-data" (OuterVolumeSpecName: "config-data") pod "07f96d49-8858-4aca-b9c2-3cf489845764" (UID: "07f96d49-8858-4aca-b9c2-3cf489845764"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.439691 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.453673 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 11:32:42.454103 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="ceilometer-notification-agent" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454138 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="ceilometer-notification-agent" Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 11:32:42.454153 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="sg-core" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454159 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="sg-core" Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 11:32:42.454171 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81e6c561-d55c-48fa-94a9-2dd7d491fd48" containerName="probe" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454178 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="81e6c561-d55c-48fa-94a9-2dd7d491fd48" containerName="probe" Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 11:32:42.454190 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81e6c561-d55c-48fa-94a9-2dd7d491fd48" containerName="cinder-scheduler" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454196 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="81e6c561-d55c-48fa-94a9-2dd7d491fd48" containerName="cinder-scheduler" Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 
11:32:42.454212 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="ceilometer-central-agent" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454219 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="ceilometer-central-agent" Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 11:32:42.454240 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="proxy-httpd" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454246 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="proxy-httpd" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454412 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="81e6c561-d55c-48fa-94a9-2dd7d491fd48" containerName="probe" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454423 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="sg-core" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454438 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="81e6c561-d55c-48fa-94a9-2dd7d491fd48" containerName="cinder-scheduler" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454450 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="ceilometer-central-agent" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454464 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="proxy-httpd" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.454475 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" containerName="ceilometer-notification-agent" Feb 26 11:32:42 crc 
kubenswrapper[4699]: I0226 11:32:42.455468 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.458181 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.463360 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.500351 4699 scope.go:117] "RemoveContainer" containerID="e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.524105 4699 scope.go:117] "RemoveContainer" containerID="c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351" Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 11:32:42.524585 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351\": container with ID starting with c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351 not found: ID does not exist" containerID="c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.524613 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351"} err="failed to get container status \"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351\": rpc error: code = NotFound desc = could not find container \"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351\": container with ID starting with c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.524634 4699 scope.go:117] "RemoveContainer" 
containerID="02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9" Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 11:32:42.525045 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9\": container with ID starting with 02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9 not found: ID does not exist" containerID="02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.525066 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9"} err="failed to get container status \"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9\": rpc error: code = NotFound desc = could not find container \"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9\": container with ID starting with 02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.525079 4699 scope.go:117] "RemoveContainer" containerID="9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c" Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 11:32:42.525824 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c\": container with ID starting with 9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c not found: ID does not exist" containerID="9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.525865 4699 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c"} err="failed to get container status \"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c\": rpc error: code = NotFound desc = could not find container \"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c\": container with ID starting with 9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.525897 4699 scope.go:117] "RemoveContainer" containerID="e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3" Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 11:32:42.526236 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3\": container with ID starting with e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3 not found: ID does not exist" containerID="e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.526261 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3"} err="failed to get container status \"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3\": rpc error: code = NotFound desc = could not find container \"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3\": container with ID starting with e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.526278 4699 scope.go:117] "RemoveContainer" containerID="c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.526552 4699 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351"} err="failed to get container status \"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351\": rpc error: code = NotFound desc = could not find container \"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351\": container with ID starting with c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.526576 4699 scope.go:117] "RemoveContainer" containerID="02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.526910 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9"} err="failed to get container status \"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9\": rpc error: code = NotFound desc = could not find container \"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9\": container with ID starting with 02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.526958 4699 scope.go:117] "RemoveContainer" containerID="9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.527765 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c"} err="failed to get container status \"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c\": rpc error: code = NotFound desc = could not find container \"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c\": container with ID starting with 9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c not 
found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.527838 4699 scope.go:117] "RemoveContainer" containerID="e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.528364 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3"} err="failed to get container status \"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3\": rpc error: code = NotFound desc = could not find container \"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3\": container with ID starting with e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.528394 4699 scope.go:117] "RemoveContainer" containerID="c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.528687 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351"} err="failed to get container status \"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351\": rpc error: code = NotFound desc = could not find container \"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351\": container with ID starting with c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.528713 4699 scope.go:117] "RemoveContainer" containerID="02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.528885 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9"} err="failed to get 
container status \"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9\": rpc error: code = NotFound desc = could not find container \"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9\": container with ID starting with 02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.528907 4699 scope.go:117] "RemoveContainer" containerID="9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.529224 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c"} err="failed to get container status \"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c\": rpc error: code = NotFound desc = could not find container \"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c\": container with ID starting with 9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.529248 4699 scope.go:117] "RemoveContainer" containerID="e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.529713 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3"} err="failed to get container status \"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3\": rpc error: code = NotFound desc = could not find container \"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3\": container with ID starting with e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.529739 4699 scope.go:117] "RemoveContainer" 
containerID="c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.529996 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351"} err="failed to get container status \"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351\": rpc error: code = NotFound desc = could not find container \"c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351\": container with ID starting with c714dd3df08d24893c1686f2182b14c09d893c6e1d7a839c51ccd459a3ce9351 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.530220 4699 scope.go:117] "RemoveContainer" containerID="02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.530674 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9"} err="failed to get container status \"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9\": rpc error: code = NotFound desc = could not find container \"02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9\": container with ID starting with 02dd2ccd8503a1659c35420f8d63f4295c914297fd09360844038e1619316fe9 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.530706 4699 scope.go:117] "RemoveContainer" containerID="9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.531197 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c"} err="failed to get container status \"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c\": rpc error: code = NotFound desc = could 
not find container \"9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c\": container with ID starting with 9d1a910b1344e0f76de46c17e25b2c0531d04770743ec34ccf90ed802219e36c not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.531222 4699 scope.go:117] "RemoveContainer" containerID="e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.531497 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3"} err="failed to get container status \"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3\": rpc error: code = NotFound desc = could not find container \"e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3\": container with ID starting with e4311314351f6e5a0b18fcaf16a77e48c75c4d300a5b98e2ac381a56049f63e3 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.531523 4699 scope.go:117] "RemoveContainer" containerID="85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.534810 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f96d49-8858-4aca-b9c2-3cf489845764-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.557511 4699 scope.go:117] "RemoveContainer" containerID="22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.586811 4699 scope.go:117] "RemoveContainer" containerID="85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78" Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 11:32:42.587286 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78\": container with ID starting with 85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78 not found: ID does not exist" containerID="85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.587341 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78"} err="failed to get container status \"85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78\": rpc error: code = NotFound desc = could not find container \"85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78\": container with ID starting with 85950baeb6758614d608c7fa199dcfa1f953ec17c373ed9a7510133fe5c2bb78 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.587370 4699 scope.go:117] "RemoveContainer" containerID="22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981" Feb 26 11:32:42 crc kubenswrapper[4699]: E0226 11:32:42.587624 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981\": container with ID starting with 22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981 not found: ID does not exist" containerID="22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.587654 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981"} err="failed to get container status \"22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981\": rpc error: code = NotFound desc = could not find container \"22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981\": container with ID 
starting with 22eb49c2ddda393e2d578a662fade5ebe09a2739d93f76bbdbb5c7f8759be981 not found: ID does not exist" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.637066 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.637153 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.637211 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbf1f488-444f-45d3-b5e6-44506bf45f8e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.637273 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.637407 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr9rw\" (UniqueName: \"kubernetes.io/projected/fbf1f488-444f-45d3-b5e6-44506bf45f8e-kube-api-access-rr9rw\") pod \"cinder-scheduler-0\" (UID: 
\"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.637475 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.696163 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.704240 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.735194 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.738511 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.739350 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbf1f488-444f-45d3-b5e6-44506bf45f8e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.739435 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbf1f488-444f-45d3-b5e6-44506bf45f8e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.739555 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.739699 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr9rw\" (UniqueName: \"kubernetes.io/projected/fbf1f488-444f-45d3-b5e6-44506bf45f8e-kube-api-access-rr9rw\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.739812 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.739905 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.739994 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.743884 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.744328 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.744986 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.744994 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.746406 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.748450 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf1f488-444f-45d3-b5e6-44506bf45f8e-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.752515 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.785501 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr9rw\" (UniqueName: \"kubernetes.io/projected/fbf1f488-444f-45d3-b5e6-44506bf45f8e-kube-api-access-rr9rw\") pod \"cinder-scheduler-0\" (UID: \"fbf1f488-444f-45d3-b5e6-44506bf45f8e\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.787665 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.841643 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-log-httpd\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.841691 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-run-httpd\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.841727 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.841771 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.841798 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-scripts\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.841897 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-config-data\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.841926 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnc2v\" (UniqueName: \"kubernetes.io/projected/6b07016c-61a8-4b19-8635-4f6475523855-kube-api-access-vnc2v\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.945015 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-log-httpd\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.945456 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-run-httpd\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.945496 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.945555 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.945585 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-scripts\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.945610 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-config-data\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.945638 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnc2v\" (UniqueName: \"kubernetes.io/projected/6b07016c-61a8-4b19-8635-4f6475523855-kube-api-access-vnc2v\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.946065 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-run-httpd\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.946455 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-log-httpd\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.952478 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-scripts\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.952840 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-config-data\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.962161 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.964065 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:42 crc kubenswrapper[4699]: I0226 11:32:42.966132 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnc2v\" (UniqueName: \"kubernetes.io/projected/6b07016c-61a8-4b19-8635-4f6475523855-kube-api-access-vnc2v\") pod \"ceilometer-0\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " pod="openstack/ceilometer-0" Feb 26 11:32:43 crc kubenswrapper[4699]: I0226 11:32:43.090930 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:43 crc kubenswrapper[4699]: I0226 11:32:43.510341 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:43 crc kubenswrapper[4699]: I0226 11:32:43.741058 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:44 crc kubenswrapper[4699]: I0226 11:32:44.151723 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 26 11:32:44 crc kubenswrapper[4699]: I0226 11:32:44.288931 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f96d49-8858-4aca-b9c2-3cf489845764" path="/var/lib/kubelet/pods/07f96d49-8858-4aca-b9c2-3cf489845764/volumes" Feb 26 11:32:44 crc kubenswrapper[4699]: I0226 11:32:44.290010 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e6c561-d55c-48fa-94a9-2dd7d491fd48" path="/var/lib/kubelet/pods/81e6c561-d55c-48fa-94a9-2dd7d491fd48/volumes" Feb 26 11:32:44 crc kubenswrapper[4699]: I0226 11:32:44.417369 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbf1f488-444f-45d3-b5e6-44506bf45f8e","Type":"ContainerStarted","Data":"d7313f7d812f0c721fef8f099cad83da0fa5cd005f09edd6e1f3b5f85eb5c41c"} Feb 26 11:32:44 crc kubenswrapper[4699]: I0226 11:32:44.417685 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbf1f488-444f-45d3-b5e6-44506bf45f8e","Type":"ContainerStarted","Data":"ba90720c72681e041c30d96ae30052b46b323ce9d0eb3d66eef995ce500a24cd"} Feb 26 11:32:44 crc kubenswrapper[4699]: I0226 11:32:44.420093 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b07016c-61a8-4b19-8635-4f6475523855","Type":"ContainerStarted","Data":"902e07d96029273f87858340fd822c319ca3a6b168bf4b0377c625b530b7ae55"} Feb 26 11:32:45 crc kubenswrapper[4699]: I0226 11:32:45.433855 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbf1f488-444f-45d3-b5e6-44506bf45f8e","Type":"ContainerStarted","Data":"6a6319f2fcf1acee6f01d40acf526a906716c01afd194e182603f39590d2124d"} Feb 26 11:32:45 crc kubenswrapper[4699]: I0226 11:32:45.442973 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b07016c-61a8-4b19-8635-4f6475523855","Type":"ContainerStarted","Data":"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399"} Feb 26 11:32:45 crc kubenswrapper[4699]: I0226 11:32:45.443013 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b07016c-61a8-4b19-8635-4f6475523855","Type":"ContainerStarted","Data":"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924"} Feb 26 11:32:45 crc kubenswrapper[4699]: I0226 11:32:45.456217 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.4562006 podStartE2EDuration="3.4562006s" podCreationTimestamp="2026-02-26 11:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:45.452356909 +0000 UTC m=+1311.263183343" watchObservedRunningTime="2026-02-26 11:32:45.4562006 +0000 UTC m=+1311.267027034" Feb 26 11:32:45 crc kubenswrapper[4699]: I0226 11:32:45.830457 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:47 crc kubenswrapper[4699]: I0226 11:32:47.465826 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b07016c-61a8-4b19-8635-4f6475523855","Type":"ContainerStarted","Data":"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb"} Feb 26 11:32:47 crc kubenswrapper[4699]: I0226 11:32:47.673746 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:47 crc kubenswrapper[4699]: I0226 11:32:47.675696 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-78cbc76b59-m6shv" Feb 26 11:32:47 crc kubenswrapper[4699]: I0226 11:32:47.788239 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 11:32:48 crc kubenswrapper[4699]: I0226 11:32:48.130947 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-57899c756d-w9pc5" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:32:48 crc kubenswrapper[4699]: I0226 11:32:48.131095 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:32:48 crc kubenswrapper[4699]: I0226 11:32:48.489735 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d45896d49-mh862" Feb 26 11:32:48 crc kubenswrapper[4699]: I0226 11:32:48.581096 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59dd795c56-7kv72"] Feb 26 11:32:48 crc kubenswrapper[4699]: I0226 11:32:48.581483 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59dd795c56-7kv72" podUID="715a80f0-cdba-439c-8a82-4838bf8f7e50" containerName="neutron-api" containerID="cri-o://e5405c70871ee395752c1da1df07066a938aedcb1ac422f960283753ab469ce2" gracePeriod=30 Feb 26 11:32:48 crc kubenswrapper[4699]: I0226 11:32:48.582090 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59dd795c56-7kv72" podUID="715a80f0-cdba-439c-8a82-4838bf8f7e50" containerName="neutron-httpd" 
containerID="cri-o://79c878075032024e487997f5af9db4e3be830d392f6df8fc08ba5dbf79596db4" gracePeriod=30 Feb 26 11:32:49 crc kubenswrapper[4699]: I0226 11:32:49.487900 4699 generic.go:334] "Generic (PLEG): container finished" podID="715a80f0-cdba-439c-8a82-4838bf8f7e50" containerID="79c878075032024e487997f5af9db4e3be830d392f6df8fc08ba5dbf79596db4" exitCode=0 Feb 26 11:32:49 crc kubenswrapper[4699]: I0226 11:32:49.487968 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dd795c56-7kv72" event={"ID":"715a80f0-cdba-439c-8a82-4838bf8f7e50","Type":"ContainerDied","Data":"79c878075032024e487997f5af9db4e3be830d392f6df8fc08ba5dbf79596db4"} Feb 26 11:32:53 crc kubenswrapper[4699]: I0226 11:32:53.154417 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 11:32:53 crc kubenswrapper[4699]: I0226 11:32:53.530949 4699 generic.go:334] "Generic (PLEG): container finished" podID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerID="de9a25314ef41f7d3414b57dcaeec2a9add4d5ecb708b80dc9af27c79856ba9b" exitCode=137 Feb 26 11:32:53 crc kubenswrapper[4699]: I0226 11:32:53.530990 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57899c756d-w9pc5" event={"ID":"78d85906-b78a-46eb-b5dd-4da95c1222d8","Type":"ContainerDied","Data":"de9a25314ef41f7d3414b57dcaeec2a9add4d5ecb708b80dc9af27c79856ba9b"} Feb 26 11:32:53 crc kubenswrapper[4699]: I0226 11:32:53.958248 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.101302 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d85906-b78a-46eb-b5dd-4da95c1222d8-logs\") pod \"78d85906-b78a-46eb-b5dd-4da95c1222d8\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.102164 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78d85906-b78a-46eb-b5dd-4da95c1222d8-logs" (OuterVolumeSpecName: "logs") pod "78d85906-b78a-46eb-b5dd-4da95c1222d8" (UID: "78d85906-b78a-46eb-b5dd-4da95c1222d8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.102338 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-scripts\") pod \"78d85906-b78a-46eb-b5dd-4da95c1222d8\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.102988 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-combined-ca-bundle\") pod \"78d85906-b78a-46eb-b5dd-4da95c1222d8\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.103045 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-config-data\") pod \"78d85906-b78a-46eb-b5dd-4da95c1222d8\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.103083 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-tls-certs\") pod \"78d85906-b78a-46eb-b5dd-4da95c1222d8\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.103300 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-secret-key\") pod \"78d85906-b78a-46eb-b5dd-4da95c1222d8\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.103351 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-252gw\" (UniqueName: \"kubernetes.io/projected/78d85906-b78a-46eb-b5dd-4da95c1222d8-kube-api-access-252gw\") pod \"78d85906-b78a-46eb-b5dd-4da95c1222d8\" (UID: \"78d85906-b78a-46eb-b5dd-4da95c1222d8\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.104376 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d85906-b78a-46eb-b5dd-4da95c1222d8-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.108174 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "78d85906-b78a-46eb-b5dd-4da95c1222d8" (UID: "78d85906-b78a-46eb-b5dd-4da95c1222d8"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.112129 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d85906-b78a-46eb-b5dd-4da95c1222d8-kube-api-access-252gw" (OuterVolumeSpecName: "kube-api-access-252gw") pod "78d85906-b78a-46eb-b5dd-4da95c1222d8" (UID: "78d85906-b78a-46eb-b5dd-4da95c1222d8"). InnerVolumeSpecName "kube-api-access-252gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.134566 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-scripts" (OuterVolumeSpecName: "scripts") pod "78d85906-b78a-46eb-b5dd-4da95c1222d8" (UID: "78d85906-b78a-46eb-b5dd-4da95c1222d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.135554 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-config-data" (OuterVolumeSpecName: "config-data") pod "78d85906-b78a-46eb-b5dd-4da95c1222d8" (UID: "78d85906-b78a-46eb-b5dd-4da95c1222d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.137308 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78d85906-b78a-46eb-b5dd-4da95c1222d8" (UID: "78d85906-b78a-46eb-b5dd-4da95c1222d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.186189 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "78d85906-b78a-46eb-b5dd-4da95c1222d8" (UID: "78d85906-b78a-46eb-b5dd-4da95c1222d8"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.208308 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.208342 4699 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.208356 4699 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.208371 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-252gw\" (UniqueName: \"kubernetes.io/projected/78d85906-b78a-46eb-b5dd-4da95c1222d8-kube-api-access-252gw\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.208385 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d85906-b78a-46eb-b5dd-4da95c1222d8-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.208396 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/78d85906-b78a-46eb-b5dd-4da95c1222d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.540811 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"16db7cc3-bd7c-44aa-b92f-d2a645d96ef0","Type":"ContainerStarted","Data":"7a084309507e408ca5233a482e41c1bf08c7da3ff18ca5d93123d8caac0c9c63"} Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.543890 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57899c756d-w9pc5" event={"ID":"78d85906-b78a-46eb-b5dd-4da95c1222d8","Type":"ContainerDied","Data":"70b6c63ca13b9c59a7d033612c4fd91b9c2d11c7f06db99a50ef89d5c7c7c5da"} Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.544169 4699 scope.go:117] "RemoveContainer" containerID="5570b961c7c2f73533bbe65fa87a9f8cc0b880e79add1f25b918377e32b9375d" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.544267 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57899c756d-w9pc5" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.556161 4699 generic.go:334] "Generic (PLEG): container finished" podID="715a80f0-cdba-439c-8a82-4838bf8f7e50" containerID="e5405c70871ee395752c1da1df07066a938aedcb1ac422f960283753ab469ce2" exitCode=0 Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.556225 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dd795c56-7kv72" event={"ID":"715a80f0-cdba-439c-8a82-4838bf8f7e50","Type":"ContainerDied","Data":"e5405c70871ee395752c1da1df07066a938aedcb1ac422f960283753ab469ce2"} Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.578487 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.671453034 podStartE2EDuration="18.578468089s" podCreationTimestamp="2026-02-26 11:32:36 +0000 UTC" firstStartedPulling="2026-02-26 11:32:37.657299522 +0000 UTC m=+1303.468125956" lastFinishedPulling="2026-02-26 11:32:53.564314577 +0000 UTC m=+1319.375141011" observedRunningTime="2026-02-26 11:32:54.57539743 +0000 UTC m=+1320.386223884" watchObservedRunningTime="2026-02-26 11:32:54.578468089 +0000 UTC m=+1320.389294523" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.591920 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b07016c-61a8-4b19-8635-4f6475523855","Type":"ContainerStarted","Data":"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca"} Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.592144 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="ceilometer-central-agent" containerID="cri-o://a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924" gracePeriod=30 Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.592462 4699 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.592790 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="proxy-httpd" containerID="cri-o://f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca" gracePeriod=30 Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.592846 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="sg-core" containerID="cri-o://8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb" gracePeriod=30 Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.592887 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="ceilometer-notification-agent" containerID="cri-o://b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399" gracePeriod=30 Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.622130 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.648336 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8557516720000002 podStartE2EDuration="12.648315398s" podCreationTimestamp="2026-02-26 11:32:42 +0000 UTC" firstStartedPulling="2026-02-26 11:32:43.765570862 +0000 UTC m=+1309.576397296" lastFinishedPulling="2026-02-26 11:32:53.558134588 +0000 UTC m=+1319.368961022" observedRunningTime="2026-02-26 11:32:54.641299614 +0000 UTC m=+1320.452126068" watchObservedRunningTime="2026-02-26 11:32:54.648315398 +0000 UTC m=+1320.459141832" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.720666 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-ovndb-tls-certs\") pod \"715a80f0-cdba-439c-8a82-4838bf8f7e50\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.720764 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st8wb\" (UniqueName: \"kubernetes.io/projected/715a80f0-cdba-439c-8a82-4838bf8f7e50-kube-api-access-st8wb\") pod \"715a80f0-cdba-439c-8a82-4838bf8f7e50\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.720842 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-combined-ca-bundle\") pod \"715a80f0-cdba-439c-8a82-4838bf8f7e50\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.720986 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-httpd-config\") pod \"715a80f0-cdba-439c-8a82-4838bf8f7e50\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.721030 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-config\") pod \"715a80f0-cdba-439c-8a82-4838bf8f7e50\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.730147 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57899c756d-w9pc5"] Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.734650 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715a80f0-cdba-439c-8a82-4838bf8f7e50-kube-api-access-st8wb" (OuterVolumeSpecName: "kube-api-access-st8wb") pod "715a80f0-cdba-439c-8a82-4838bf8f7e50" (UID: "715a80f0-cdba-439c-8a82-4838bf8f7e50"). InnerVolumeSpecName "kube-api-access-st8wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.740439 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-57899c756d-w9pc5"] Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.752325 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "715a80f0-cdba-439c-8a82-4838bf8f7e50" (UID: "715a80f0-cdba-439c-8a82-4838bf8f7e50"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.816318 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "715a80f0-cdba-439c-8a82-4838bf8f7e50" (UID: "715a80f0-cdba-439c-8a82-4838bf8f7e50"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.822201 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-config" (OuterVolumeSpecName: "config") pod "715a80f0-cdba-439c-8a82-4838bf8f7e50" (UID: "715a80f0-cdba-439c-8a82-4838bf8f7e50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.822928 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-config\") pod \"715a80f0-cdba-439c-8a82-4838bf8f7e50\" (UID: \"715a80f0-cdba-439c-8a82-4838bf8f7e50\") " Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.823866 4699 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.823888 4699 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.823901 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st8wb\" (UniqueName: 
\"kubernetes.io/projected/715a80f0-cdba-439c-8a82-4838bf8f7e50-kube-api-access-st8wb\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:54 crc kubenswrapper[4699]: W0226 11:32:54.823995 4699 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/715a80f0-cdba-439c-8a82-4838bf8f7e50/volumes/kubernetes.io~secret/config Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.824016 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-config" (OuterVolumeSpecName: "config") pod "715a80f0-cdba-439c-8a82-4838bf8f7e50" (UID: "715a80f0-cdba-439c-8a82-4838bf8f7e50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.825736 4699 scope.go:117] "RemoveContainer" containerID="de9a25314ef41f7d3414b57dcaeec2a9add4d5ecb708b80dc9af27c79856ba9b" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.859766 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "715a80f0-cdba-439c-8a82-4838bf8f7e50" (UID: "715a80f0-cdba-439c-8a82-4838bf8f7e50"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.925338 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:54 crc kubenswrapper[4699]: I0226 11:32:54.925600 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715a80f0-cdba-439c-8a82-4838bf8f7e50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.352614 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.434161 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-sg-core-conf-yaml\") pod \"6b07016c-61a8-4b19-8635-4f6475523855\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.434309 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-log-httpd\") pod \"6b07016c-61a8-4b19-8635-4f6475523855\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.434365 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnc2v\" (UniqueName: \"kubernetes.io/projected/6b07016c-61a8-4b19-8635-4f6475523855-kube-api-access-vnc2v\") pod \"6b07016c-61a8-4b19-8635-4f6475523855\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.434391 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-scripts\") pod \"6b07016c-61a8-4b19-8635-4f6475523855\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.434439 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-run-httpd\") pod \"6b07016c-61a8-4b19-8635-4f6475523855\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.434946 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b07016c-61a8-4b19-8635-4f6475523855" (UID: "6b07016c-61a8-4b19-8635-4f6475523855"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.435096 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-config-data\") pod \"6b07016c-61a8-4b19-8635-4f6475523855\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.435200 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-combined-ca-bundle\") pod \"6b07016c-61a8-4b19-8635-4f6475523855\" (UID: \"6b07016c-61a8-4b19-8635-4f6475523855\") " Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.435216 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b07016c-61a8-4b19-8635-4f6475523855" (UID: "6b07016c-61a8-4b19-8635-4f6475523855"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.435798 4699 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.435818 4699 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b07016c-61a8-4b19-8635-4f6475523855-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.438957 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-scripts" (OuterVolumeSpecName: "scripts") pod "6b07016c-61a8-4b19-8635-4f6475523855" (UID: "6b07016c-61a8-4b19-8635-4f6475523855"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.439287 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b07016c-61a8-4b19-8635-4f6475523855-kube-api-access-vnc2v" (OuterVolumeSpecName: "kube-api-access-vnc2v") pod "6b07016c-61a8-4b19-8635-4f6475523855" (UID: "6b07016c-61a8-4b19-8635-4f6475523855"). InnerVolumeSpecName "kube-api-access-vnc2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.461906 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b07016c-61a8-4b19-8635-4f6475523855" (UID: "6b07016c-61a8-4b19-8635-4f6475523855"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.529823 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b07016c-61a8-4b19-8635-4f6475523855" (UID: "6b07016c-61a8-4b19-8635-4f6475523855"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.537506 4699 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.537588 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnc2v\" (UniqueName: \"kubernetes.io/projected/6b07016c-61a8-4b19-8635-4f6475523855-kube-api-access-vnc2v\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.537604 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.537631 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.557600 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-config-data" (OuterVolumeSpecName: "config-data") pod "6b07016c-61a8-4b19-8635-4f6475523855" (UID: "6b07016c-61a8-4b19-8635-4f6475523855"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.636577 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dd795c56-7kv72" event={"ID":"715a80f0-cdba-439c-8a82-4838bf8f7e50","Type":"ContainerDied","Data":"6345d756a7b816036dc69f325dd74145097fc551abbeb710dfcdf0451b76e1c8"} Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.636633 4699 scope.go:117] "RemoveContainer" containerID="79c878075032024e487997f5af9db4e3be830d392f6df8fc08ba5dbf79596db4" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.636647 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59dd795c56-7kv72" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.649087 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b07016c-61a8-4b19-8635-4f6475523855-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.671967 4699 generic.go:334] "Generic (PLEG): container finished" podID="6b07016c-61a8-4b19-8635-4f6475523855" containerID="f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca" exitCode=0 Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.672001 4699 generic.go:334] "Generic (PLEG): container finished" podID="6b07016c-61a8-4b19-8635-4f6475523855" containerID="8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb" exitCode=2 Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.672010 4699 generic.go:334] "Generic (PLEG): container finished" podID="6b07016c-61a8-4b19-8635-4f6475523855" containerID="b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399" exitCode=0 Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.672018 4699 generic.go:334] "Generic (PLEG): container finished" podID="6b07016c-61a8-4b19-8635-4f6475523855" containerID="a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924" 
exitCode=0 Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.672969 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.673005 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b07016c-61a8-4b19-8635-4f6475523855","Type":"ContainerDied","Data":"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca"} Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.673035 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b07016c-61a8-4b19-8635-4f6475523855","Type":"ContainerDied","Data":"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb"} Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.673047 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b07016c-61a8-4b19-8635-4f6475523855","Type":"ContainerDied","Data":"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399"} Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.673060 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b07016c-61a8-4b19-8635-4f6475523855","Type":"ContainerDied","Data":"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924"} Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.673069 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b07016c-61a8-4b19-8635-4f6475523855","Type":"ContainerDied","Data":"902e07d96029273f87858340fd822c319ca3a6b168bf4b0377c625b530b7ae55"} Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.695313 4699 scope.go:117] "RemoveContainer" containerID="e5405c70871ee395752c1da1df07066a938aedcb1ac422f960283753ab469ce2" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.699878 4699 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-db-create-snmfx"] Feb 26 11:32:55 crc kubenswrapper[4699]: E0226 11:32:55.700299 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715a80f0-cdba-439c-8a82-4838bf8f7e50" containerName="neutron-httpd" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.700311 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="715a80f0-cdba-439c-8a82-4838bf8f7e50" containerName="neutron-httpd" Feb 26 11:32:55 crc kubenswrapper[4699]: E0226 11:32:55.700328 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715a80f0-cdba-439c-8a82-4838bf8f7e50" containerName="neutron-api" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.700333 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="715a80f0-cdba-439c-8a82-4838bf8f7e50" containerName="neutron-api" Feb 26 11:32:55 crc kubenswrapper[4699]: E0226 11:32:55.700350 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="ceilometer-notification-agent" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.700356 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="ceilometer-notification-agent" Feb 26 11:32:55 crc kubenswrapper[4699]: E0226 11:32:55.700368 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerName="horizon-log" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.700374 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerName="horizon-log" Feb 26 11:32:55 crc kubenswrapper[4699]: E0226 11:32:55.700392 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="sg-core" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.700398 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b07016c-61a8-4b19-8635-4f6475523855" 
containerName="sg-core" Feb 26 11:32:55 crc kubenswrapper[4699]: E0226 11:32:55.700407 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerName="horizon" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.700412 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerName="horizon" Feb 26 11:32:55 crc kubenswrapper[4699]: E0226 11:32:55.700422 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="ceilometer-central-agent" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.700428 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="ceilometer-central-agent" Feb 26 11:32:55 crc kubenswrapper[4699]: E0226 11:32:55.700446 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="proxy-httpd" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.700453 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="proxy-httpd" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.702084 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" containerName="horizon-log" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.702156 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="ceilometer-central-agent" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.702174 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="proxy-httpd" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.702227 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" 
containerName="horizon" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.702251 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="sg-core" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.702266 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="715a80f0-cdba-439c-8a82-4838bf8f7e50" containerName="neutron-httpd" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.702308 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b07016c-61a8-4b19-8635-4f6475523855" containerName="ceilometer-notification-agent" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.702322 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="715a80f0-cdba-439c-8a82-4838bf8f7e50" containerName="neutron-api" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.704058 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-snmfx" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.711555 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-snmfx"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.727236 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.740873 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.747182 4699 scope.go:117] "RemoveContainer" containerID="f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.791465 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.793498 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.797083 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.797291 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.799173 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59dd795c56-7kv72"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.831524 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59dd795c56-7kv72"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.842428 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.848723 4699 scope.go:117] "RemoveContainer" containerID="8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.852332 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c5acc31-dbe4-4698-8346-9a0dbc05234b-operator-scripts\") pod \"nova-api-db-create-snmfx\" (UID: \"6c5acc31-dbe4-4698-8346-9a0dbc05234b\") " pod="openstack/nova-api-db-create-snmfx" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.852471 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znlcj\" (UniqueName: \"kubernetes.io/projected/6c5acc31-dbe4-4698-8346-9a0dbc05234b-kube-api-access-znlcj\") pod \"nova-api-db-create-snmfx\" (UID: \"6c5acc31-dbe4-4698-8346-9a0dbc05234b\") " pod="openstack/nova-api-db-create-snmfx" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.852954 4699 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-db-create-62mhs"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.854304 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-62mhs" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.868973 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-62mhs"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.886167 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3146-account-create-update-xf6c8"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.887475 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3146-account-create-update-xf6c8" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.898277 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3146-account-create-update-xf6c8"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.904743 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.941876 4699 scope.go:117] "RemoveContainer" containerID="b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.955023 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.955088 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-log-httpd\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " 
pod="openstack/ceilometer-0" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.955168 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.955201 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89884\" (UniqueName: \"kubernetes.io/projected/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-kube-api-access-89884\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.955220 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e54b257-33a7-43bd-80c5-30915ae82341-operator-scripts\") pod \"nova-cell0-db-create-62mhs\" (UID: \"7e54b257-33a7-43bd-80c5-30915ae82341\") " pod="openstack/nova-cell0-db-create-62mhs" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.955236 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-scripts\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.955274 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcfgb\" (UniqueName: \"kubernetes.io/projected/7e54b257-33a7-43bd-80c5-30915ae82341-kube-api-access-jcfgb\") pod \"nova-cell0-db-create-62mhs\" (UID: \"7e54b257-33a7-43bd-80c5-30915ae82341\") " pod="openstack/nova-cell0-db-create-62mhs" Feb 26 
11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.955290 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-config-data\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.955310 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-run-httpd\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.955354 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c5acc31-dbe4-4698-8346-9a0dbc05234b-operator-scripts\") pod \"nova-api-db-create-snmfx\" (UID: \"6c5acc31-dbe4-4698-8346-9a0dbc05234b\") " pod="openstack/nova-api-db-create-snmfx" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.955391 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znlcj\" (UniqueName: \"kubernetes.io/projected/6c5acc31-dbe4-4698-8346-9a0dbc05234b-kube-api-access-znlcj\") pod \"nova-api-db-create-snmfx\" (UID: \"6c5acc31-dbe4-4698-8346-9a0dbc05234b\") " pod="openstack/nova-api-db-create-snmfx" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.956723 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c5acc31-dbe4-4698-8346-9a0dbc05234b-operator-scripts\") pod \"nova-api-db-create-snmfx\" (UID: \"6c5acc31-dbe4-4698-8346-9a0dbc05234b\") " pod="openstack/nova-api-db-create-snmfx" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.966259 4699 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-db-create-hq69l"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.967761 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hq69l" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.975343 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hq69l"] Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.976073 4699 scope.go:117] "RemoveContainer" containerID="a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924" Feb 26 11:32:55 crc kubenswrapper[4699]: I0226 11:32:55.995233 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znlcj\" (UniqueName: \"kubernetes.io/projected/6c5acc31-dbe4-4698-8346-9a0dbc05234b-kube-api-access-znlcj\") pod \"nova-api-db-create-snmfx\" (UID: \"6c5acc31-dbe4-4698-8346-9a0dbc05234b\") " pod="openstack/nova-api-db-create-snmfx" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.055915 4699 scope.go:117] "RemoveContainer" containerID="f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.056108 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-snmfx" Feb 26 11:32:56 crc kubenswrapper[4699]: E0226 11:32:56.056390 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca\": container with ID starting with f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca not found: ID does not exist" containerID="f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.056428 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca"} err="failed to get container status \"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca\": rpc error: code = NotFound desc = could not find container \"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca\": container with ID starting with f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.056453 4699 scope.go:117] "RemoveContainer" containerID="8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb" Feb 26 11:32:56 crc kubenswrapper[4699]: E0226 11:32:56.056769 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb\": container with ID starting with 8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb not found: ID does not exist" containerID="8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.056801 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb"} 
err="failed to get container status \"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb\": rpc error: code = NotFound desc = could not find container \"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb\": container with ID starting with 8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.056819 4699 scope.go:117] "RemoveContainer" containerID="b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.056849 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.056894 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bssrg\" (UniqueName: \"kubernetes.io/projected/86425865-434f-43e8-9592-e890078837a2-kube-api-access-bssrg\") pod \"nova-cell1-db-create-hq69l\" (UID: \"86425865-434f-43e8-9592-e890078837a2\") " pod="openstack/nova-cell1-db-create-hq69l" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.056925 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89884\" (UniqueName: \"kubernetes.io/projected/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-kube-api-access-89884\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.056948 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e54b257-33a7-43bd-80c5-30915ae82341-operator-scripts\") pod \"nova-cell0-db-create-62mhs\" (UID: 
\"7e54b257-33a7-43bd-80c5-30915ae82341\") " pod="openstack/nova-cell0-db-create-62mhs" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.056962 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-scripts\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.057160 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcfgb\" (UniqueName: \"kubernetes.io/projected/7e54b257-33a7-43bd-80c5-30915ae82341-kube-api-access-jcfgb\") pod \"nova-cell0-db-create-62mhs\" (UID: \"7e54b257-33a7-43bd-80c5-30915ae82341\") " pod="openstack/nova-cell0-db-create-62mhs" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.057222 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-config-data\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.057280 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-run-httpd\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.057340 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86425865-434f-43e8-9592-e890078837a2-operator-scripts\") pod \"nova-cell1-db-create-hq69l\" (UID: \"86425865-434f-43e8-9592-e890078837a2\") " pod="openstack/nova-cell1-db-create-hq69l" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.057409 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrl4z\" (UniqueName: \"kubernetes.io/projected/f4a229eb-75a5-41b1-8342-53a3a1b433a0-kube-api-access-mrl4z\") pod \"nova-api-3146-account-create-update-xf6c8\" (UID: \"f4a229eb-75a5-41b1-8342-53a3a1b433a0\") " pod="openstack/nova-api-3146-account-create-update-xf6c8" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.057538 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a229eb-75a5-41b1-8342-53a3a1b433a0-operator-scripts\") pod \"nova-api-3146-account-create-update-xf6c8\" (UID: \"f4a229eb-75a5-41b1-8342-53a3a1b433a0\") " pod="openstack/nova-api-3146-account-create-update-xf6c8" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.057591 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.057674 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-log-httpd\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: E0226 11:32:56.061280 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399\": container with ID starting with b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399 not found: ID does not exist" containerID="b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399" Feb 26 
11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.061331 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399"} err="failed to get container status \"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399\": rpc error: code = NotFound desc = could not find container \"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399\": container with ID starting with b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399 not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.061360 4699 scope.go:117] "RemoveContainer" containerID="a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.062136 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.062239 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-run-httpd\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.062279 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-log-httpd\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: E0226 11:32:56.062593 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924\": container with ID starting with a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924 not found: ID does not exist" 
containerID="a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.062641 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924"} err="failed to get container status \"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924\": rpc error: code = NotFound desc = could not find container \"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924\": container with ID starting with a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924 not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.062671 4699 scope.go:117] "RemoveContainer" containerID="f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.062939 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e54b257-33a7-43bd-80c5-30915ae82341-operator-scripts\") pod \"nova-cell0-db-create-62mhs\" (UID: \"7e54b257-33a7-43bd-80c5-30915ae82341\") " pod="openstack/nova-cell0-db-create-62mhs" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.063840 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.066004 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.070966 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca"} err="failed to 
get container status \"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca\": rpc error: code = NotFound desc = could not find container \"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca\": container with ID starting with f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.071258 4699 scope.go:117] "RemoveContainer" containerID="8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.075136 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb"} err="failed to get container status \"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb\": rpc error: code = NotFound desc = could not find container \"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb\": container with ID starting with 8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.075195 4699 scope.go:117] "RemoveContainer" containerID="b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.075651 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-scripts\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.077519 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 
crc kubenswrapper[4699]: I0226 11:32:56.077596 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-66cf-account-create-update-qvvdk"] Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.079434 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.080359 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399"} err="failed to get container status \"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399\": rpc error: code = NotFound desc = could not find container \"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399\": container with ID starting with b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399 not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.080405 4699 scope.go:117] "RemoveContainer" containerID="a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.081458 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924"} err="failed to get container status \"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924\": rpc error: code = NotFound desc = could not find container \"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924\": container with ID starting with a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924 not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.081512 4699 scope.go:117] "RemoveContainer" containerID="f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.082872 4699 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.083382 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca"} err="failed to get container status \"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca\": rpc error: code = NotFound desc = could not find container \"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca\": container with ID starting with f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.083404 4699 scope.go:117] "RemoveContainer" containerID="8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.083953 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb"} err="failed to get container status \"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb\": rpc error: code = NotFound desc = could not find container \"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb\": container with ID starting with 8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.083977 4699 scope.go:117] "RemoveContainer" containerID="b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.084325 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399"} err="failed to get container status \"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399\": rpc error: code = NotFound desc = could not 
find container \"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399\": container with ID starting with b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399 not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.084343 4699 scope.go:117] "RemoveContainer" containerID="a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.085330 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924"} err="failed to get container status \"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924\": rpc error: code = NotFound desc = could not find container \"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924\": container with ID starting with a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924 not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.085362 4699 scope.go:117] "RemoveContainer" containerID="f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.086040 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcfgb\" (UniqueName: \"kubernetes.io/projected/7e54b257-33a7-43bd-80c5-30915ae82341-kube-api-access-jcfgb\") pod \"nova-cell0-db-create-62mhs\" (UID: \"7e54b257-33a7-43bd-80c5-30915ae82341\") " pod="openstack/nova-cell0-db-create-62mhs" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.086336 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca"} err="failed to get container status \"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca\": rpc error: code = NotFound desc = could not find container 
\"f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca\": container with ID starting with f7f57995e81f7d57f16c415cdd4fc8b824f88da1357748823087bec1b66775ca not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.086474 4699 scope.go:117] "RemoveContainer" containerID="8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.086827 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb"} err="failed to get container status \"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb\": rpc error: code = NotFound desc = could not find container \"8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb\": container with ID starting with 8a2b3d98e1b247b2e00c05dc8e82dc50d659148596d3e539832fe8084c0ef4fb not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.086973 4699 scope.go:117] "RemoveContainer" containerID="b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.087302 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-config-data\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.088212 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89884\" (UniqueName: \"kubernetes.io/projected/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-kube-api-access-89884\") pod \"ceilometer-0\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.088221 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-66cf-account-create-update-qvvdk"] Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.088524 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399"} err="failed to get container status \"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399\": rpc error: code = NotFound desc = could not find container \"b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399\": container with ID starting with b81ac7705e4e99a6910779d59e3a0a7b4d83c6a14f827e737f193b76841b4399 not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.088705 4699 scope.go:117] "RemoveContainer" containerID="a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.089091 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924"} err="failed to get container status \"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924\": rpc error: code = NotFound desc = could not find container \"a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924\": container with ID starting with a35ff88a75232546cd35dc8ab0d6697d72733b6a1f0634f1f02e398729694924 not found: ID does not exist" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.120212 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.160782 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bssrg\" (UniqueName: \"kubernetes.io/projected/86425865-434f-43e8-9592-e890078837a2-kube-api-access-bssrg\") pod \"nova-cell1-db-create-hq69l\" (UID: \"86425865-434f-43e8-9592-e890078837a2\") " pod="openstack/nova-cell1-db-create-hq69l" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.161057 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86425865-434f-43e8-9592-e890078837a2-operator-scripts\") pod \"nova-cell1-db-create-hq69l\" (UID: \"86425865-434f-43e8-9592-e890078837a2\") " pod="openstack/nova-cell1-db-create-hq69l" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.161175 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrl4z\" (UniqueName: \"kubernetes.io/projected/f4a229eb-75a5-41b1-8342-53a3a1b433a0-kube-api-access-mrl4z\") pod \"nova-api-3146-account-create-update-xf6c8\" (UID: \"f4a229eb-75a5-41b1-8342-53a3a1b433a0\") " pod="openstack/nova-api-3146-account-create-update-xf6c8" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.161261 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a229eb-75a5-41b1-8342-53a3a1b433a0-operator-scripts\") pod \"nova-api-3146-account-create-update-xf6c8\" (UID: \"f4a229eb-75a5-41b1-8342-53a3a1b433a0\") " pod="openstack/nova-api-3146-account-create-update-xf6c8" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.162081 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a229eb-75a5-41b1-8342-53a3a1b433a0-operator-scripts\") pod \"nova-api-3146-account-create-update-xf6c8\" (UID: 
\"f4a229eb-75a5-41b1-8342-53a3a1b433a0\") " pod="openstack/nova-api-3146-account-create-update-xf6c8" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.163011 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86425865-434f-43e8-9592-e890078837a2-operator-scripts\") pod \"nova-cell1-db-create-hq69l\" (UID: \"86425865-434f-43e8-9592-e890078837a2\") " pod="openstack/nova-cell1-db-create-hq69l" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.180990 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrl4z\" (UniqueName: \"kubernetes.io/projected/f4a229eb-75a5-41b1-8342-53a3a1b433a0-kube-api-access-mrl4z\") pod \"nova-api-3146-account-create-update-xf6c8\" (UID: \"f4a229eb-75a5-41b1-8342-53a3a1b433a0\") " pod="openstack/nova-api-3146-account-create-update-xf6c8" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.184688 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bssrg\" (UniqueName: \"kubernetes.io/projected/86425865-434f-43e8-9592-e890078837a2-kube-api-access-bssrg\") pod \"nova-cell1-db-create-hq69l\" (UID: \"86425865-434f-43e8-9592-e890078837a2\") " pod="openstack/nova-cell1-db-create-hq69l" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.234968 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-62mhs" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.244916 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3146-account-create-update-xf6c8" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.268292 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nl2p\" (UniqueName: \"kubernetes.io/projected/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-kube-api-access-7nl2p\") pod \"nova-cell0-66cf-account-create-update-qvvdk\" (UID: \"f99c6b36-a5f6-4f0b-973f-dfa853d2c558\") " pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.268815 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-operator-scripts\") pod \"nova-cell0-66cf-account-create-update-qvvdk\" (UID: \"f99c6b36-a5f6-4f0b-973f-dfa853d2c558\") " pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.332855 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-hq69l" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.353833 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b07016c-61a8-4b19-8635-4f6475523855" path="/var/lib/kubelet/pods/6b07016c-61a8-4b19-8635-4f6475523855/volumes" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.354751 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="715a80f0-cdba-439c-8a82-4838bf8f7e50" path="/var/lib/kubelet/pods/715a80f0-cdba-439c-8a82-4838bf8f7e50/volumes" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.355985 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d85906-b78a-46eb-b5dd-4da95c1222d8" path="/var/lib/kubelet/pods/78d85906-b78a-46eb-b5dd-4da95c1222d8/volumes" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.356825 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-43f0-account-create-update-vgmlz"] Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.358359 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-43f0-account-create-update-vgmlz"] Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.358508 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.360497 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.370978 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-operator-scripts\") pod \"nova-cell0-66cf-account-create-update-qvvdk\" (UID: \"f99c6b36-a5f6-4f0b-973f-dfa853d2c558\") " pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.371204 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nl2p\" (UniqueName: \"kubernetes.io/projected/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-kube-api-access-7nl2p\") pod \"nova-cell0-66cf-account-create-update-qvvdk\" (UID: \"f99c6b36-a5f6-4f0b-973f-dfa853d2c558\") " pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.373846 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-operator-scripts\") pod \"nova-cell0-66cf-account-create-update-qvvdk\" (UID: \"f99c6b36-a5f6-4f0b-973f-dfa853d2c558\") " pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.396047 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nl2p\" (UniqueName: \"kubernetes.io/projected/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-kube-api-access-7nl2p\") pod \"nova-cell0-66cf-account-create-update-qvvdk\" (UID: \"f99c6b36-a5f6-4f0b-973f-dfa853d2c558\") " pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 
11:32:56.473632 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgx45\" (UniqueName: \"kubernetes.io/projected/dea40818-89fa-4b78-9833-82635861fee1-kube-api-access-qgx45\") pod \"nova-cell1-43f0-account-create-update-vgmlz\" (UID: \"dea40818-89fa-4b78-9833-82635861fee1\") " pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.473771 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea40818-89fa-4b78-9833-82635861fee1-operator-scripts\") pod \"nova-cell1-43f0-account-create-update-vgmlz\" (UID: \"dea40818-89fa-4b78-9833-82635861fee1\") " pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.496633 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.575663 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgx45\" (UniqueName: \"kubernetes.io/projected/dea40818-89fa-4b78-9833-82635861fee1-kube-api-access-qgx45\") pod \"nova-cell1-43f0-account-create-update-vgmlz\" (UID: \"dea40818-89fa-4b78-9833-82635861fee1\") " pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.576022 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea40818-89fa-4b78-9833-82635861fee1-operator-scripts\") pod \"nova-cell1-43f0-account-create-update-vgmlz\" (UID: \"dea40818-89fa-4b78-9833-82635861fee1\") " pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.577485 4699 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea40818-89fa-4b78-9833-82635861fee1-operator-scripts\") pod \"nova-cell1-43f0-account-create-update-vgmlz\" (UID: \"dea40818-89fa-4b78-9833-82635861fee1\") " pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.599281 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgx45\" (UniqueName: \"kubernetes.io/projected/dea40818-89fa-4b78-9833-82635861fee1-kube-api-access-qgx45\") pod \"nova-cell1-43f0-account-create-update-vgmlz\" (UID: \"dea40818-89fa-4b78-9833-82635861fee1\") " pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.675320 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.728679 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.769411 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af9cf0d-3dcb-4d56-9373-2ec6ea323564","Type":"ContainerStarted","Data":"56fc6b0ffcfa6ade2b63264a18f35f46ed39dca62b34ae50c392d8a43e061b9a"} Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.904707 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-snmfx"] Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.914249 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3146-account-create-update-xf6c8"] Feb 26 11:32:56 crc kubenswrapper[4699]: I0226 11:32:56.982018 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.177500 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-62mhs"] Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.267978 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hq69l"] Feb 26 11:32:57 crc kubenswrapper[4699]: W0226 11:32:57.287917 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86425865_434f_43e8_9592_e890078837a2.slice/crio-cae37a1dd41503ce2668296075b414232a79b4124cf7606abb527f141a12944b WatchSource:0}: Error finding container cae37a1dd41503ce2668296075b414232a79b4124cf7606abb527f141a12944b: Status 404 returned error can't find the container with id cae37a1dd41503ce2668296075b414232a79b4124cf7606abb527f141a12944b Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.334126 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-66cf-account-create-update-qvvdk"] Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.505318 4699 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-43f0-account-create-update-vgmlz"] Feb 26 11:32:57 crc kubenswrapper[4699]: W0226 11:32:57.514337 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddea40818_89fa_4b78_9833_82635861fee1.slice/crio-28d6dd8244e4d61e26f02b09033cd56dadc3f377a4f21e89d14a901d2be9c9d5 WatchSource:0}: Error finding container 28d6dd8244e4d61e26f02b09033cd56dadc3f377a4f21e89d14a901d2be9c9d5: Status 404 returned error can't find the container with id 28d6dd8244e4d61e26f02b09033cd56dadc3f377a4f21e89d14a901d2be9c9d5 Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.805013 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af9cf0d-3dcb-4d56-9373-2ec6ea323564","Type":"ContainerStarted","Data":"5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.808404 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4a229eb-75a5-41b1-8342-53a3a1b433a0" containerID="b2b62d6d79c5c992c3884d7e4c7aa453502b8500701d02db975cc913cb332656" exitCode=0 Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.808473 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3146-account-create-update-xf6c8" event={"ID":"f4a229eb-75a5-41b1-8342-53a3a1b433a0","Type":"ContainerDied","Data":"b2b62d6d79c5c992c3884d7e4c7aa453502b8500701d02db975cc913cb332656"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.808503 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3146-account-create-update-xf6c8" event={"ID":"f4a229eb-75a5-41b1-8342-53a3a1b433a0","Type":"ContainerStarted","Data":"129588bf429da3e10dc74316d2ee44e760af7923cd0ca87e4e69a1941943c196"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.810267 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" event={"ID":"dea40818-89fa-4b78-9833-82635861fee1","Type":"ContainerStarted","Data":"28d6dd8244e4d61e26f02b09033cd56dadc3f377a4f21e89d14a901d2be9c9d5"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.812694 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" event={"ID":"f99c6b36-a5f6-4f0b-973f-dfa853d2c558","Type":"ContainerStarted","Data":"ea224b941b0465af7d8b7b7d5e0297ed56d62f796e3b6566730ce00cb01d16ec"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.812731 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" event={"ID":"f99c6b36-a5f6-4f0b-973f-dfa853d2c558","Type":"ContainerStarted","Data":"2361c68db1b6f7a97eaf5c8da87ffb02f30d64940e48194e78df269546c62761"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.815798 4699 generic.go:334] "Generic (PLEG): container finished" podID="6c5acc31-dbe4-4698-8346-9a0dbc05234b" containerID="0569f07824e60d0703bc892d604ca5230523b1fde72c768bd283ae0d47703780" exitCode=0 Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.815865 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-snmfx" event={"ID":"6c5acc31-dbe4-4698-8346-9a0dbc05234b","Type":"ContainerDied","Data":"0569f07824e60d0703bc892d604ca5230523b1fde72c768bd283ae0d47703780"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.815887 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-snmfx" event={"ID":"6c5acc31-dbe4-4698-8346-9a0dbc05234b","Type":"ContainerStarted","Data":"367226eed7c68edeebb6220e8afa4a67d5e1c5ee276d3ca9afc548c3ebb597e1"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.818562 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hq69l" 
event={"ID":"86425865-434f-43e8-9592-e890078837a2","Type":"ContainerStarted","Data":"e2f8c469ec04f6028bf261997ea76ce892a579e71cd0b1e3cbda4d1a898468a0"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.818606 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hq69l" event={"ID":"86425865-434f-43e8-9592-e890078837a2","Type":"ContainerStarted","Data":"cae37a1dd41503ce2668296075b414232a79b4124cf7606abb527f141a12944b"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.825471 4699 generic.go:334] "Generic (PLEG): container finished" podID="7e54b257-33a7-43bd-80c5-30915ae82341" containerID="9eff27ca91f87caa5ed2a02975a6d6bc2e239264a6a323e5cbc0471084500265" exitCode=0 Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.825522 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-62mhs" event={"ID":"7e54b257-33a7-43bd-80c5-30915ae82341","Type":"ContainerDied","Data":"9eff27ca91f87caa5ed2a02975a6d6bc2e239264a6a323e5cbc0471084500265"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.825548 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-62mhs" event={"ID":"7e54b257-33a7-43bd-80c5-30915ae82341","Type":"ContainerStarted","Data":"a09cad48346ffd6c0e7b88aa7cf1e96d3627b8eded2b64f26a84ecb47f6b8740"} Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.854833 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-hq69l" podStartSLOduration=2.854813687 podStartE2EDuration="2.854813687s" podCreationTimestamp="2026-02-26 11:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:57.850387779 +0000 UTC m=+1323.661214213" watchObservedRunningTime="2026-02-26 11:32:57.854813687 +0000 UTC m=+1323.665640121" Feb 26 11:32:57 crc kubenswrapper[4699]: I0226 11:32:57.873032 4699 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" podStartSLOduration=1.8730150559999998 podStartE2EDuration="1.873015056s" podCreationTimestamp="2026-02-26 11:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:57.864159909 +0000 UTC m=+1323.674986353" watchObservedRunningTime="2026-02-26 11:32:57.873015056 +0000 UTC m=+1323.683841490" Feb 26 11:32:58 crc kubenswrapper[4699]: I0226 11:32:58.838653 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af9cf0d-3dcb-4d56-9373-2ec6ea323564","Type":"ContainerStarted","Data":"49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1"} Feb 26 11:32:58 crc kubenswrapper[4699]: I0226 11:32:58.840818 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" event={"ID":"dea40818-89fa-4b78-9833-82635861fee1","Type":"ContainerStarted","Data":"853cdd9a99dcd559f8a9a9863c9ecd3351cc72fb23481557abd22c41a3816b2d"} Feb 26 11:32:58 crc kubenswrapper[4699]: I0226 11:32:58.842256 4699 generic.go:334] "Generic (PLEG): container finished" podID="f99c6b36-a5f6-4f0b-973f-dfa853d2c558" containerID="ea224b941b0465af7d8b7b7d5e0297ed56d62f796e3b6566730ce00cb01d16ec" exitCode=0 Feb 26 11:32:58 crc kubenswrapper[4699]: I0226 11:32:58.842341 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" event={"ID":"f99c6b36-a5f6-4f0b-973f-dfa853d2c558","Type":"ContainerDied","Data":"ea224b941b0465af7d8b7b7d5e0297ed56d62f796e3b6566730ce00cb01d16ec"} Feb 26 11:32:58 crc kubenswrapper[4699]: I0226 11:32:58.850747 4699 generic.go:334] "Generic (PLEG): container finished" podID="86425865-434f-43e8-9592-e890078837a2" containerID="e2f8c469ec04f6028bf261997ea76ce892a579e71cd0b1e3cbda4d1a898468a0" exitCode=0 Feb 26 
11:32:58 crc kubenswrapper[4699]: I0226 11:32:58.851203 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hq69l" event={"ID":"86425865-434f-43e8-9592-e890078837a2","Type":"ContainerDied","Data":"e2f8c469ec04f6028bf261997ea76ce892a579e71cd0b1e3cbda4d1a898468a0"} Feb 26 11:32:58 crc kubenswrapper[4699]: I0226 11:32:58.870322 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" podStartSLOduration=2.870294618 podStartE2EDuration="2.870294618s" podCreationTimestamp="2026-02-26 11:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:58.858476165 +0000 UTC m=+1324.669302629" watchObservedRunningTime="2026-02-26 11:32:58.870294618 +0000 UTC m=+1324.681121092" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.187187 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.390608 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-62mhs" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.396689 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-snmfx" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.403975 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3146-account-create-update-xf6c8" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.455977 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znlcj\" (UniqueName: \"kubernetes.io/projected/6c5acc31-dbe4-4698-8346-9a0dbc05234b-kube-api-access-znlcj\") pod \"6c5acc31-dbe4-4698-8346-9a0dbc05234b\" (UID: \"6c5acc31-dbe4-4698-8346-9a0dbc05234b\") " Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.456080 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c5acc31-dbe4-4698-8346-9a0dbc05234b-operator-scripts\") pod \"6c5acc31-dbe4-4698-8346-9a0dbc05234b\" (UID: \"6c5acc31-dbe4-4698-8346-9a0dbc05234b\") " Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.456216 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcfgb\" (UniqueName: \"kubernetes.io/projected/7e54b257-33a7-43bd-80c5-30915ae82341-kube-api-access-jcfgb\") pod \"7e54b257-33a7-43bd-80c5-30915ae82341\" (UID: \"7e54b257-33a7-43bd-80c5-30915ae82341\") " Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.456243 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e54b257-33a7-43bd-80c5-30915ae82341-operator-scripts\") pod \"7e54b257-33a7-43bd-80c5-30915ae82341\" (UID: \"7e54b257-33a7-43bd-80c5-30915ae82341\") " Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.456944 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5acc31-dbe4-4698-8346-9a0dbc05234b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c5acc31-dbe4-4698-8346-9a0dbc05234b" (UID: "6c5acc31-dbe4-4698-8346-9a0dbc05234b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.457075 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e54b257-33a7-43bd-80c5-30915ae82341-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e54b257-33a7-43bd-80c5-30915ae82341" (UID: "7e54b257-33a7-43bd-80c5-30915ae82341"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.462931 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5acc31-dbe4-4698-8346-9a0dbc05234b-kube-api-access-znlcj" (OuterVolumeSpecName: "kube-api-access-znlcj") pod "6c5acc31-dbe4-4698-8346-9a0dbc05234b" (UID: "6c5acc31-dbe4-4698-8346-9a0dbc05234b"). InnerVolumeSpecName "kube-api-access-znlcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.463381 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e54b257-33a7-43bd-80c5-30915ae82341-kube-api-access-jcfgb" (OuterVolumeSpecName: "kube-api-access-jcfgb") pod "7e54b257-33a7-43bd-80c5-30915ae82341" (UID: "7e54b257-33a7-43bd-80c5-30915ae82341"). InnerVolumeSpecName "kube-api-access-jcfgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.557897 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a229eb-75a5-41b1-8342-53a3a1b433a0-operator-scripts\") pod \"f4a229eb-75a5-41b1-8342-53a3a1b433a0\" (UID: \"f4a229eb-75a5-41b1-8342-53a3a1b433a0\") " Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.558297 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrl4z\" (UniqueName: \"kubernetes.io/projected/f4a229eb-75a5-41b1-8342-53a3a1b433a0-kube-api-access-mrl4z\") pod \"f4a229eb-75a5-41b1-8342-53a3a1b433a0\" (UID: \"f4a229eb-75a5-41b1-8342-53a3a1b433a0\") " Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.559018 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znlcj\" (UniqueName: \"kubernetes.io/projected/6c5acc31-dbe4-4698-8346-9a0dbc05234b-kube-api-access-znlcj\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.559042 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c5acc31-dbe4-4698-8346-9a0dbc05234b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.559054 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcfgb\" (UniqueName: \"kubernetes.io/projected/7e54b257-33a7-43bd-80c5-30915ae82341-kube-api-access-jcfgb\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.559065 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e54b257-33a7-43bd-80c5-30915ae82341-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.560080 4699 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a229eb-75a5-41b1-8342-53a3a1b433a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4a229eb-75a5-41b1-8342-53a3a1b433a0" (UID: "f4a229eb-75a5-41b1-8342-53a3a1b433a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.567619 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a229eb-75a5-41b1-8342-53a3a1b433a0-kube-api-access-mrl4z" (OuterVolumeSpecName: "kube-api-access-mrl4z") pod "f4a229eb-75a5-41b1-8342-53a3a1b433a0" (UID: "f4a229eb-75a5-41b1-8342-53a3a1b433a0"). InnerVolumeSpecName "kube-api-access-mrl4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.660683 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a229eb-75a5-41b1-8342-53a3a1b433a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.660724 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrl4z\" (UniqueName: \"kubernetes.io/projected/f4a229eb-75a5-41b1-8342-53a3a1b433a0-kube-api-access-mrl4z\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.861046 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-62mhs" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.861057 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-62mhs" event={"ID":"7e54b257-33a7-43bd-80c5-30915ae82341","Type":"ContainerDied","Data":"a09cad48346ffd6c0e7b88aa7cf1e96d3627b8eded2b64f26a84ecb47f6b8740"} Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.861100 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a09cad48346ffd6c0e7b88aa7cf1e96d3627b8eded2b64f26a84ecb47f6b8740" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.862797 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3146-account-create-update-xf6c8" event={"ID":"f4a229eb-75a5-41b1-8342-53a3a1b433a0","Type":"ContainerDied","Data":"129588bf429da3e10dc74316d2ee44e760af7923cd0ca87e4e69a1941943c196"} Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.862830 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="129588bf429da3e10dc74316d2ee44e760af7923cd0ca87e4e69a1941943c196" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.862896 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3146-account-create-update-xf6c8" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.866033 4699 generic.go:334] "Generic (PLEG): container finished" podID="dea40818-89fa-4b78-9833-82635861fee1" containerID="853cdd9a99dcd559f8a9a9863c9ecd3351cc72fb23481557abd22c41a3816b2d" exitCode=0 Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.866092 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" event={"ID":"dea40818-89fa-4b78-9833-82635861fee1","Type":"ContainerDied","Data":"853cdd9a99dcd559f8a9a9863c9ecd3351cc72fb23481557abd22c41a3816b2d"} Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.868743 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-snmfx" Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.868789 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-snmfx" event={"ID":"6c5acc31-dbe4-4698-8346-9a0dbc05234b","Type":"ContainerDied","Data":"367226eed7c68edeebb6220e8afa4a67d5e1c5ee276d3ca9afc548c3ebb597e1"} Feb 26 11:32:59 crc kubenswrapper[4699]: I0226 11:32:59.868815 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="367226eed7c68edeebb6220e8afa4a67d5e1c5ee276d3ca9afc548c3ebb597e1" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.299842 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hq69l" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.345487 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.381490 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bssrg\" (UniqueName: \"kubernetes.io/projected/86425865-434f-43e8-9592-e890078837a2-kube-api-access-bssrg\") pod \"86425865-434f-43e8-9592-e890078837a2\" (UID: \"86425865-434f-43e8-9592-e890078837a2\") " Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.381681 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86425865-434f-43e8-9592-e890078837a2-operator-scripts\") pod \"86425865-434f-43e8-9592-e890078837a2\" (UID: \"86425865-434f-43e8-9592-e890078837a2\") " Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.383356 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86425865-434f-43e8-9592-e890078837a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86425865-434f-43e8-9592-e890078837a2" (UID: "86425865-434f-43e8-9592-e890078837a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.386281 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86425865-434f-43e8-9592-e890078837a2-kube-api-access-bssrg" (OuterVolumeSpecName: "kube-api-access-bssrg") pod "86425865-434f-43e8-9592-e890078837a2" (UID: "86425865-434f-43e8-9592-e890078837a2"). InnerVolumeSpecName "kube-api-access-bssrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.494740 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-operator-scripts\") pod \"f99c6b36-a5f6-4f0b-973f-dfa853d2c558\" (UID: \"f99c6b36-a5f6-4f0b-973f-dfa853d2c558\") " Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.495088 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nl2p\" (UniqueName: \"kubernetes.io/projected/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-kube-api-access-7nl2p\") pod \"f99c6b36-a5f6-4f0b-973f-dfa853d2c558\" (UID: \"f99c6b36-a5f6-4f0b-973f-dfa853d2c558\") " Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.495671 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86425865-434f-43e8-9592-e890078837a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.495701 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bssrg\" (UniqueName: \"kubernetes.io/projected/86425865-434f-43e8-9592-e890078837a2-kube-api-access-bssrg\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.501187 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f99c6b36-a5f6-4f0b-973f-dfa853d2c558" (UID: "f99c6b36-a5f6-4f0b-973f-dfa853d2c558"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.504318 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-kube-api-access-7nl2p" (OuterVolumeSpecName: "kube-api-access-7nl2p") pod "f99c6b36-a5f6-4f0b-973f-dfa853d2c558" (UID: "f99c6b36-a5f6-4f0b-973f-dfa853d2c558"). InnerVolumeSpecName "kube-api-access-7nl2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.597102 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.597156 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nl2p\" (UniqueName: \"kubernetes.io/projected/f99c6b36-a5f6-4f0b-973f-dfa853d2c558-kube-api-access-7nl2p\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.878906 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af9cf0d-3dcb-4d56-9373-2ec6ea323564","Type":"ContainerStarted","Data":"640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921"} Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.881628 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.881676 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-66cf-account-create-update-qvvdk" event={"ID":"f99c6b36-a5f6-4f0b-973f-dfa853d2c558","Type":"ContainerDied","Data":"2361c68db1b6f7a97eaf5c8da87ffb02f30d64940e48194e78df269546c62761"} Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.881707 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2361c68db1b6f7a97eaf5c8da87ffb02f30d64940e48194e78df269546c62761" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.883180 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hq69l" Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.883202 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hq69l" event={"ID":"86425865-434f-43e8-9592-e890078837a2","Type":"ContainerDied","Data":"cae37a1dd41503ce2668296075b414232a79b4124cf7606abb527f141a12944b"} Feb 26 11:33:00 crc kubenswrapper[4699]: I0226 11:33:00.883255 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cae37a1dd41503ce2668296075b414232a79b4124cf7606abb527f141a12944b" Feb 26 11:33:01 crc kubenswrapper[4699]: I0226 11:33:01.236744 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" Feb 26 11:33:01 crc kubenswrapper[4699]: I0226 11:33:01.309873 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea40818-89fa-4b78-9833-82635861fee1-operator-scripts\") pod \"dea40818-89fa-4b78-9833-82635861fee1\" (UID: \"dea40818-89fa-4b78-9833-82635861fee1\") " Feb 26 11:33:01 crc kubenswrapper[4699]: I0226 11:33:01.309953 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgx45\" (UniqueName: \"kubernetes.io/projected/dea40818-89fa-4b78-9833-82635861fee1-kube-api-access-qgx45\") pod \"dea40818-89fa-4b78-9833-82635861fee1\" (UID: \"dea40818-89fa-4b78-9833-82635861fee1\") " Feb 26 11:33:01 crc kubenswrapper[4699]: I0226 11:33:01.310961 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea40818-89fa-4b78-9833-82635861fee1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dea40818-89fa-4b78-9833-82635861fee1" (UID: "dea40818-89fa-4b78-9833-82635861fee1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:33:01 crc kubenswrapper[4699]: I0226 11:33:01.314958 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea40818-89fa-4b78-9833-82635861fee1-kube-api-access-qgx45" (OuterVolumeSpecName: "kube-api-access-qgx45") pod "dea40818-89fa-4b78-9833-82635861fee1" (UID: "dea40818-89fa-4b78-9833-82635861fee1"). InnerVolumeSpecName "kube-api-access-qgx45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:01 crc kubenswrapper[4699]: I0226 11:33:01.411874 4699 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea40818-89fa-4b78-9833-82635861fee1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:01 crc kubenswrapper[4699]: I0226 11:33:01.411912 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgx45\" (UniqueName: \"kubernetes.io/projected/dea40818-89fa-4b78-9833-82635861fee1-kube-api-access-qgx45\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:01 crc kubenswrapper[4699]: I0226 11:33:01.892608 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" event={"ID":"dea40818-89fa-4b78-9833-82635861fee1","Type":"ContainerDied","Data":"28d6dd8244e4d61e26f02b09033cd56dadc3f377a4f21e89d14a901d2be9c9d5"} Feb 26 11:33:01 crc kubenswrapper[4699]: I0226 11:33:01.892663 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28d6dd8244e4d61e26f02b09033cd56dadc3f377a4f21e89d14a901d2be9c9d5" Feb 26 11:33:01 crc kubenswrapper[4699]: I0226 11:33:01.892727 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-43f0-account-create-update-vgmlz" Feb 26 11:33:02 crc kubenswrapper[4699]: I0226 11:33:02.903872 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af9cf0d-3dcb-4d56-9373-2ec6ea323564","Type":"ContainerStarted","Data":"122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf"} Feb 26 11:33:02 crc kubenswrapper[4699]: I0226 11:33:02.904295 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:33:02 crc kubenswrapper[4699]: I0226 11:33:02.904064 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="proxy-httpd" containerID="cri-o://122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf" gracePeriod=30 Feb 26 11:33:02 crc kubenswrapper[4699]: I0226 11:33:02.904022 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="ceilometer-central-agent" containerID="cri-o://5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805" gracePeriod=30 Feb 26 11:33:02 crc kubenswrapper[4699]: I0226 11:33:02.904080 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="sg-core" containerID="cri-o://640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921" gracePeriod=30 Feb 26 11:33:02 crc kubenswrapper[4699]: I0226 11:33:02.904092 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="ceilometer-notification-agent" containerID="cri-o://49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1" gracePeriod=30 Feb 26 11:33:02 crc kubenswrapper[4699]: I0226 
11:33:02.929934 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.328763021 podStartE2EDuration="7.929907274s" podCreationTimestamp="2026-02-26 11:32:55 +0000 UTC" firstStartedPulling="2026-02-26 11:32:56.681566655 +0000 UTC m=+1322.492393099" lastFinishedPulling="2026-02-26 11:33:02.282710918 +0000 UTC m=+1328.093537352" observedRunningTime="2026-02-26 11:33:02.922571841 +0000 UTC m=+1328.733398275" watchObservedRunningTime="2026-02-26 11:33:02.929907274 +0000 UTC m=+1328.740733708" Feb 26 11:33:03 crc kubenswrapper[4699]: I0226 11:33:03.917456 4699 generic.go:334] "Generic (PLEG): container finished" podID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerID="122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf" exitCode=0 Feb 26 11:33:03 crc kubenswrapper[4699]: I0226 11:33:03.917490 4699 generic.go:334] "Generic (PLEG): container finished" podID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerID="640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921" exitCode=2 Feb 26 11:33:03 crc kubenswrapper[4699]: I0226 11:33:03.917499 4699 generic.go:334] "Generic (PLEG): container finished" podID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerID="49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1" exitCode=0 Feb 26 11:33:03 crc kubenswrapper[4699]: I0226 11:33:03.917523 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af9cf0d-3dcb-4d56-9373-2ec6ea323564","Type":"ContainerDied","Data":"122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf"} Feb 26 11:33:03 crc kubenswrapper[4699]: I0226 11:33:03.917558 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af9cf0d-3dcb-4d56-9373-2ec6ea323564","Type":"ContainerDied","Data":"640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921"} Feb 26 11:33:03 crc kubenswrapper[4699]: I0226 11:33:03.917574 4699 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af9cf0d-3dcb-4d56-9373-2ec6ea323564","Type":"ContainerDied","Data":"49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1"} Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.612152 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.689376 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-config-data\") pod \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.689440 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-run-httpd\") pod \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.689495 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-sg-core-conf-yaml\") pod \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.689536 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-combined-ca-bundle\") pod \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.689562 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89884\" (UniqueName: 
\"kubernetes.io/projected/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-kube-api-access-89884\") pod \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.689621 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-scripts\") pod \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.689964 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-log-httpd\") pod \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\" (UID: \"3af9cf0d-3dcb-4d56-9373-2ec6ea323564\") " Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.691082 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3af9cf0d-3dcb-4d56-9373-2ec6ea323564" (UID: "3af9cf0d-3dcb-4d56-9373-2ec6ea323564"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.692407 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3af9cf0d-3dcb-4d56-9373-2ec6ea323564" (UID: "3af9cf0d-3dcb-4d56-9373-2ec6ea323564"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.698989 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-kube-api-access-89884" (OuterVolumeSpecName: "kube-api-access-89884") pod "3af9cf0d-3dcb-4d56-9373-2ec6ea323564" (UID: "3af9cf0d-3dcb-4d56-9373-2ec6ea323564"). InnerVolumeSpecName "kube-api-access-89884". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.704535 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-scripts" (OuterVolumeSpecName: "scripts") pod "3af9cf0d-3dcb-4d56-9373-2ec6ea323564" (UID: "3af9cf0d-3dcb-4d56-9373-2ec6ea323564"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.721372 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3af9cf0d-3dcb-4d56-9373-2ec6ea323564" (UID: "3af9cf0d-3dcb-4d56-9373-2ec6ea323564"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.767024 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3af9cf0d-3dcb-4d56-9373-2ec6ea323564" (UID: "3af9cf0d-3dcb-4d56-9373-2ec6ea323564"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.791953 4699 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.791982 4699 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.791993 4699 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.792003 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.792013 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89884\" (UniqueName: \"kubernetes.io/projected/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-kube-api-access-89884\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.792023 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.795425 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-config-data" (OuterVolumeSpecName: "config-data") pod "3af9cf0d-3dcb-4d56-9373-2ec6ea323564" (UID: "3af9cf0d-3dcb-4d56-9373-2ec6ea323564"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.893459 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af9cf0d-3dcb-4d56-9373-2ec6ea323564-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.941096 4699 generic.go:334] "Generic (PLEG): container finished" podID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerID="5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805" exitCode=0 Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.941170 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af9cf0d-3dcb-4d56-9373-2ec6ea323564","Type":"ContainerDied","Data":"5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805"} Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.941203 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af9cf0d-3dcb-4d56-9373-2ec6ea323564","Type":"ContainerDied","Data":"56fc6b0ffcfa6ade2b63264a18f35f46ed39dca62b34ae50c392d8a43e061b9a"} Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.941221 4699 scope.go:117] "RemoveContainer" containerID="122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.941381 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.988080 4699 scope.go:117] "RemoveContainer" containerID="640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921" Feb 26 11:33:05 crc kubenswrapper[4699]: I0226 11:33:05.991569 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.013474 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.025898 4699 scope.go:117] "RemoveContainer" containerID="49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.026376 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.026838 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="proxy-httpd" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.026858 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="proxy-httpd" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.026866 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99c6b36-a5f6-4f0b-973f-dfa853d2c558" containerName="mariadb-account-create-update" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.026874 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99c6b36-a5f6-4f0b-973f-dfa853d2c558" containerName="mariadb-account-create-update" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.026895 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="ceilometer-central-agent" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.026902 4699 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="ceilometer-central-agent" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.026913 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="ceilometer-notification-agent" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.026920 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="ceilometer-notification-agent" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.026935 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e54b257-33a7-43bd-80c5-30915ae82341" containerName="mariadb-database-create" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.026942 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e54b257-33a7-43bd-80c5-30915ae82341" containerName="mariadb-database-create" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.026969 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea40818-89fa-4b78-9833-82635861fee1" containerName="mariadb-account-create-update" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.026976 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea40818-89fa-4b78-9833-82635861fee1" containerName="mariadb-account-create-update" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.026992 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5acc31-dbe4-4698-8346-9a0dbc05234b" containerName="mariadb-database-create" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.026999 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5acc31-dbe4-4698-8346-9a0dbc05234b" containerName="mariadb-database-create" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.027012 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a229eb-75a5-41b1-8342-53a3a1b433a0" containerName="mariadb-account-create-update" Feb 26 11:33:06 
crc kubenswrapper[4699]: I0226 11:33:06.027018 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a229eb-75a5-41b1-8342-53a3a1b433a0" containerName="mariadb-account-create-update" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.027031 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86425865-434f-43e8-9592-e890078837a2" containerName="mariadb-database-create" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027037 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="86425865-434f-43e8-9592-e890078837a2" containerName="mariadb-database-create" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.027049 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="sg-core" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027056 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="sg-core" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027292 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e54b257-33a7-43bd-80c5-30915ae82341" containerName="mariadb-database-create" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027310 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="ceilometer-notification-agent" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027322 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a229eb-75a5-41b1-8342-53a3a1b433a0" containerName="mariadb-account-create-update" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027329 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="proxy-httpd" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027340 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f99c6b36-a5f6-4f0b-973f-dfa853d2c558" 
containerName="mariadb-account-create-update" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027352 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="sg-core" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027361 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" containerName="ceilometer-central-agent" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027374 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea40818-89fa-4b78-9833-82635861fee1" containerName="mariadb-account-create-update" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027385 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="86425865-434f-43e8-9592-e890078837a2" containerName="mariadb-database-create" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.027394 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c5acc31-dbe4-4698-8346-9a0dbc05234b" containerName="mariadb-database-create" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.029347 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.038071 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.056250 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.056289 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.077828 4699 scope.go:117] "RemoveContainer" containerID="5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.097246 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-scripts\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.097300 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snlzj\" (UniqueName: \"kubernetes.io/projected/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-kube-api-access-snlzj\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.097354 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-run-httpd\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.097372 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.097392 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.097436 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-config-data\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.097488 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-log-httpd\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.120475 4699 scope.go:117] "RemoveContainer" containerID="122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.120929 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf\": container with ID starting with 122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf not found: ID does not exist" containerID="122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf" Feb 26 11:33:06 crc 
kubenswrapper[4699]: I0226 11:33:06.121072 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf"} err="failed to get container status \"122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf\": rpc error: code = NotFound desc = could not find container \"122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf\": container with ID starting with 122b3010bd28520fa64679c76c74e7f10dc4aef71ddb5d070e468d1dfb7dddcf not found: ID does not exist" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.121215 4699 scope.go:117] "RemoveContainer" containerID="640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.121601 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921\": container with ID starting with 640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921 not found: ID does not exist" containerID="640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.121694 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921"} err="failed to get container status \"640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921\": rpc error: code = NotFound desc = could not find container \"640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921\": container with ID starting with 640ce3baeea3e4b12efb1f4f719021e9fcae968de1b1812fed4c585f37198921 not found: ID does not exist" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.121774 4699 scope.go:117] "RemoveContainer" containerID="49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1" Feb 26 
11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.122182 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1\": container with ID starting with 49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1 not found: ID does not exist" containerID="49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.122206 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1"} err="failed to get container status \"49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1\": rpc error: code = NotFound desc = could not find container \"49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1\": container with ID starting with 49f60262b1f3a898a9823898820585f9f8010c300b1f1a4ba7911b0817efb1e1 not found: ID does not exist" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.122221 4699 scope.go:117] "RemoveContainer" containerID="5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805" Feb 26 11:33:06 crc kubenswrapper[4699]: E0226 11:33:06.122501 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805\": container with ID starting with 5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805 not found: ID does not exist" containerID="5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.122590 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805"} err="failed to get container status 
\"5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805\": rpc error: code = NotFound desc = could not find container \"5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805\": container with ID starting with 5676ecd8fae29c4c55da186cdff47b53a9f38f95d3b5e21ebca1ea44099ac805 not found: ID does not exist" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.199094 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-scripts\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.199163 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snlzj\" (UniqueName: \"kubernetes.io/projected/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-kube-api-access-snlzj\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.199224 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-run-httpd\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.199260 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.199280 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.199316 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-config-data\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.199346 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-log-httpd\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.199775 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-log-httpd\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.200394 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-run-httpd\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.202984 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.203296 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-scripts\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.203963 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.204319 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-config-data\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.220508 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snlzj\" (UniqueName: \"kubernetes.io/projected/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-kube-api-access-snlzj\") pod \"ceilometer-0\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.272218 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af9cf0d-3dcb-4d56-9373-2ec6ea323564" path="/var/lib/kubelet/pods/3af9cf0d-3dcb-4d56-9373-2ec6ea323564/volumes" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.379584 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.531881 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vx5jv"] Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.533607 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.535274 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.535801 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.536134 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wwbkn" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.543335 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vx5jv"] Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.607800 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-scripts\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.607844 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nmzl\" (UniqueName: \"kubernetes.io/projected/ef20f352-fa9c-4bc8-875d-d537f00f75d5-kube-api-access-7nmzl\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 
11:33:06.607893 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.608203 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-config-data\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.711818 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-config-data\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.711975 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-scripts\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.712015 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nmzl\" (UniqueName: \"kubernetes.io/projected/ef20f352-fa9c-4bc8-875d-d537f00f75d5-kube-api-access-7nmzl\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc 
kubenswrapper[4699]: I0226 11:33:06.712077 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.718694 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-scripts\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.718906 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.718977 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-config-data\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.730909 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nmzl\" (UniqueName: \"kubernetes.io/projected/ef20f352-fa9c-4bc8-875d-d537f00f75d5-kube-api-access-7nmzl\") pod \"nova-cell0-conductor-db-sync-vx5jv\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 
11:33:06.859429 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vx5jv"
Feb 26 11:33:06 crc kubenswrapper[4699]: I0226 11:33:06.937473 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 11:33:07 crc kubenswrapper[4699]: I0226 11:33:07.345083 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vx5jv"]
Feb 26 11:33:07 crc kubenswrapper[4699]: W0226 11:33:07.352968 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef20f352_fa9c_4bc8_875d_d537f00f75d5.slice/crio-eb24292f1cef0da184d4ae6d21a79311c33325b5ebc6fce8d6d4a49689b83c5c WatchSource:0}: Error finding container eb24292f1cef0da184d4ae6d21a79311c33325b5ebc6fce8d6d4a49689b83c5c: Status 404 returned error can't find the container with id eb24292f1cef0da184d4ae6d21a79311c33325b5ebc6fce8d6d4a49689b83c5c
Feb 26 11:33:07 crc kubenswrapper[4699]: I0226 11:33:07.991466 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"208a51e1-6d1d-4dc4-be5e-fa414dd87c53","Type":"ContainerStarted","Data":"613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc"}
Feb 26 11:33:07 crc kubenswrapper[4699]: I0226 11:33:07.992242 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"208a51e1-6d1d-4dc4-be5e-fa414dd87c53","Type":"ContainerStarted","Data":"f5db747a1df1457a54dc26f8dc7732ec37162001229405d6e3c5d95928877c82"}
Feb 26 11:33:07 crc kubenswrapper[4699]: I0226 11:33:07.992552 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vx5jv" event={"ID":"ef20f352-fa9c-4bc8-875d-d537f00f75d5","Type":"ContainerStarted","Data":"eb24292f1cef0da184d4ae6d21a79311c33325b5ebc6fce8d6d4a49689b83c5c"}
Feb 26 11:33:09 crc kubenswrapper[4699]: I0226 11:33:09.488248 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"208a51e1-6d1d-4dc4-be5e-fa414dd87c53","Type":"ContainerStarted","Data":"69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e"}
Feb 26 11:33:10 crc kubenswrapper[4699]: I0226 11:33:10.499040 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"208a51e1-6d1d-4dc4-be5e-fa414dd87c53","Type":"ContainerStarted","Data":"2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db"}
Feb 26 11:33:10 crc kubenswrapper[4699]: I0226 11:33:10.600010 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 11:33:10 crc kubenswrapper[4699]: I0226 11:33:10.600285 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d42e724c-224e-4c68-b5b4-b72d72d4ded8" containerName="glance-log" containerID="cri-o://6389553c6ab212c6bdd09de56f8a6c0bc3ab110475816ef41318a3d8e60aa198" gracePeriod=30
Feb 26 11:33:10 crc kubenswrapper[4699]: I0226 11:33:10.600423 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d42e724c-224e-4c68-b5b4-b72d72d4ded8" containerName="glance-httpd" containerID="cri-o://178382a3de582f32c10d899adfcff30626331736ab73b2469c8ca1b37fab0c4c" gracePeriod=30
Feb 26 11:33:11 crc kubenswrapper[4699]: I0226 11:33:11.424064 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 11:33:11 crc kubenswrapper[4699]: I0226 11:33:11.424579 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="03f1bc3b-c587-4c47-bbc2-3dca2240d30c" containerName="glance-httpd" containerID="cri-o://9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2" gracePeriod=30
Feb 26 11:33:11 crc kubenswrapper[4699]: I0226 11:33:11.424839 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="03f1bc3b-c587-4c47-bbc2-3dca2240d30c" containerName="glance-log" containerID="cri-o://96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558" gracePeriod=30
Feb 26 11:33:11 crc kubenswrapper[4699]: I0226 11:33:11.509481 4699 generic.go:334] "Generic (PLEG): container finished" podID="d42e724c-224e-4c68-b5b4-b72d72d4ded8" containerID="6389553c6ab212c6bdd09de56f8a6c0bc3ab110475816ef41318a3d8e60aa198" exitCode=143
Feb 26 11:33:11 crc kubenswrapper[4699]: I0226 11:33:11.509636 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d42e724c-224e-4c68-b5b4-b72d72d4ded8","Type":"ContainerDied","Data":"6389553c6ab212c6bdd09de56f8a6c0bc3ab110475816ef41318a3d8e60aa198"}
Feb 26 11:33:11 crc kubenswrapper[4699]: I0226 11:33:11.584466 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 11:33:11 crc kubenswrapper[4699]: I0226 11:33:11.585017 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 11:33:12 crc kubenswrapper[4699]: I0226 11:33:12.512756 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 11:33:12 crc kubenswrapper[4699]: I0226 11:33:12.524574 4699 generic.go:334] "Generic (PLEG): container finished" podID="03f1bc3b-c587-4c47-bbc2-3dca2240d30c" containerID="96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558" exitCode=143
Feb 26 11:33:12 crc kubenswrapper[4699]: I0226 11:33:12.524629 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"03f1bc3b-c587-4c47-bbc2-3dca2240d30c","Type":"ContainerDied","Data":"96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558"}
Feb 26 11:33:14 crc kubenswrapper[4699]: I0226 11:33:14.543662 4699 generic.go:334] "Generic (PLEG): container finished" podID="d42e724c-224e-4c68-b5b4-b72d72d4ded8" containerID="178382a3de582f32c10d899adfcff30626331736ab73b2469c8ca1b37fab0c4c" exitCode=0
Feb 26 11:33:14 crc kubenswrapper[4699]: I0226 11:33:14.543733 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d42e724c-224e-4c68-b5b4-b72d72d4ded8","Type":"ContainerDied","Data":"178382a3de582f32c10d899adfcff30626331736ab73b2469c8ca1b37fab0c4c"}
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.285186 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.374649 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-httpd-run\") pod \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.374738 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.374817 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-combined-ca-bundle\") pod \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.374933 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-public-tls-certs\") pod \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.374962 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q7ml\" (UniqueName: \"kubernetes.io/projected/d42e724c-224e-4c68-b5b4-b72d72d4ded8-kube-api-access-6q7ml\") pod \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.374980 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-scripts\") pod \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.375019 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-logs\") pod \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.375034 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-config-data\") pod \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\" (UID: \"d42e724c-224e-4c68-b5b4-b72d72d4ded8\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.375186 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d42e724c-224e-4c68-b5b4-b72d72d4ded8" (UID: "d42e724c-224e-4c68-b5b4-b72d72d4ded8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.375442 4699 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.377560 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-logs" (OuterVolumeSpecName: "logs") pod "d42e724c-224e-4c68-b5b4-b72d72d4ded8" (UID: "d42e724c-224e-4c68-b5b4-b72d72d4ded8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.385271 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "d42e724c-224e-4c68-b5b4-b72d72d4ded8" (UID: "d42e724c-224e-4c68-b5b4-b72d72d4ded8"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.385306 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-scripts" (OuterVolumeSpecName: "scripts") pod "d42e724c-224e-4c68-b5b4-b72d72d4ded8" (UID: "d42e724c-224e-4c68-b5b4-b72d72d4ded8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.404251 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42e724c-224e-4c68-b5b4-b72d72d4ded8-kube-api-access-6q7ml" (OuterVolumeSpecName: "kube-api-access-6q7ml") pod "d42e724c-224e-4c68-b5b4-b72d72d4ded8" (UID: "d42e724c-224e-4c68-b5b4-b72d72d4ded8"). InnerVolumeSpecName "kube-api-access-6q7ml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.406354 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.443548 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d42e724c-224e-4c68-b5b4-b72d72d4ded8" (UID: "d42e724c-224e-4c68-b5b4-b72d72d4ded8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.468933 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-config-data" (OuterVolumeSpecName: "config-data") pod "d42e724c-224e-4c68-b5b4-b72d72d4ded8" (UID: "d42e724c-224e-4c68-b5b4-b72d72d4ded8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.476108 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-internal-tls-certs\") pod \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.476181 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-logs\") pod \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.476227 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c2rv\" (UniqueName: \"kubernetes.io/projected/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-kube-api-access-7c2rv\") pod \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.476249 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.476284 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-scripts\") pod \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.476360 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-combined-ca-bundle\") pod \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.476485 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-config-data\") pod \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.476535 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-httpd-run\") pod \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\" (UID: \"03f1bc3b-c587-4c47-bbc2-3dca2240d30c\") "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.477956 4699 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.477992 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.478006 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q7ml\" (UniqueName: \"kubernetes.io/projected/d42e724c-224e-4c68-b5b4-b72d72d4ded8-kube-api-access-6q7ml\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.478017 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.478028 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d42e724c-224e-4c68-b5b4-b72d72d4ded8-logs\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.478038 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.478306 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-logs" (OuterVolumeSpecName: "logs") pod "03f1bc3b-c587-4c47-bbc2-3dca2240d30c" (UID: "03f1bc3b-c587-4c47-bbc2-3dca2240d30c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.478696 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "03f1bc3b-c587-4c47-bbc2-3dca2240d30c" (UID: "03f1bc3b-c587-4c47-bbc2-3dca2240d30c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.486421 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d42e724c-224e-4c68-b5b4-b72d72d4ded8" (UID: "d42e724c-224e-4c68-b5b4-b72d72d4ded8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.488479 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "03f1bc3b-c587-4c47-bbc2-3dca2240d30c" (UID: "03f1bc3b-c587-4c47-bbc2-3dca2240d30c"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.489262 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-scripts" (OuterVolumeSpecName: "scripts") pod "03f1bc3b-c587-4c47-bbc2-3dca2240d30c" (UID: "03f1bc3b-c587-4c47-bbc2-3dca2240d30c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.507471 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-kube-api-access-7c2rv" (OuterVolumeSpecName: "kube-api-access-7c2rv") pod "03f1bc3b-c587-4c47-bbc2-3dca2240d30c" (UID: "03f1bc3b-c587-4c47-bbc2-3dca2240d30c"). InnerVolumeSpecName "kube-api-access-7c2rv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.525909 4699 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.560984 4699 generic.go:334] "Generic (PLEG): container finished" podID="03f1bc3b-c587-4c47-bbc2-3dca2240d30c" containerID="9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2" exitCode=0
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.561041 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"03f1bc3b-c587-4c47-bbc2-3dca2240d30c","Type":"ContainerDied","Data":"9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2"}
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.561067 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"03f1bc3b-c587-4c47-bbc2-3dca2240d30c","Type":"ContainerDied","Data":"1807142ca5ad23a6f297805a63bd9002973ffc620dfe243a1b7bd07573b9a98e"}
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.561084 4699 scope.go:117] "RemoveContainer" containerID="9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.561265 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.562965 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "03f1bc3b-c587-4c47-bbc2-3dca2240d30c" (UID: "03f1bc3b-c587-4c47-bbc2-3dca2240d30c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.563822 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d42e724c-224e-4c68-b5b4-b72d72d4ded8","Type":"ContainerDied","Data":"93414ab02ed9ca4e817beb6280ab1441d20975697c632df8c1a82aa6fe45a0b0"}
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.563898 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.578614 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="ceilometer-central-agent" containerID="cri-o://613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc" gracePeriod=30
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.578718 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"208a51e1-6d1d-4dc4-be5e-fa414dd87c53","Type":"ContainerStarted","Data":"c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a"}
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.578788 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.579210 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="proxy-httpd" containerID="cri-o://c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a" gracePeriod=30
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.579291 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="sg-core" containerID="cri-o://2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db" gracePeriod=30
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.579341 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="ceilometer-notification-agent" containerID="cri-o://69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e" gracePeriod=30
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.586769 4699 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.586796 4699 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.586809 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-logs\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.586818 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c2rv\" (UniqueName: \"kubernetes.io/projected/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-kube-api-access-7c2rv\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.586845 4699 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.586854 4699 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.586863 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.586872 4699 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42e724c-224e-4c68-b5b4-b72d72d4ded8-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.593511 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03f1bc3b-c587-4c47-bbc2-3dca2240d30c" (UID: "03f1bc3b-c587-4c47-bbc2-3dca2240d30c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.606522 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vx5jv" event={"ID":"ef20f352-fa9c-4bc8-875d-d537f00f75d5","Type":"ContainerStarted","Data":"b4034fed15cab382c6c5fd47ff21f822b9c9aa9789392181d8ca9fe59c0d233d"}
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.614033 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.599954267 podStartE2EDuration="10.61401043s" podCreationTimestamp="2026-02-26 11:33:05 +0000 UTC" firstStartedPulling="2026-02-26 11:33:06.960653761 +0000 UTC m=+1332.771480215" lastFinishedPulling="2026-02-26 11:33:14.974709954 +0000 UTC m=+1340.785536378" observedRunningTime="2026-02-26 11:33:15.60642938 +0000 UTC m=+1341.417255814" watchObservedRunningTime="2026-02-26 11:33:15.61401043 +0000 UTC m=+1341.424836864"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.639347 4699 scope.go:117] "RemoveContainer" containerID="96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.690416 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vx5jv" podStartSLOduration=2.069838932 podStartE2EDuration="9.690393668s" podCreationTimestamp="2026-02-26 11:33:06 +0000 UTC" firstStartedPulling="2026-02-26 11:33:07.354754855 +0000 UTC m=+1333.165581289" lastFinishedPulling="2026-02-26 11:33:14.975309581 +0000 UTC m=+1340.786136025" observedRunningTime="2026-02-26 11:33:15.627984426 +0000 UTC m=+1341.438810860" watchObservedRunningTime="2026-02-26 11:33:15.690393668 +0000 UTC m=+1341.501220102"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.691602 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-config-data" (OuterVolumeSpecName: "config-data") pod "03f1bc3b-c587-4c47-bbc2-3dca2240d30c" (UID: "03f1bc3b-c587-4c47-bbc2-3dca2240d30c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.693160 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.723768 4699 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.724047 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.731171 4699 scope.go:117] "RemoveContainer" containerID="9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.738484 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 11:33:15 crc kubenswrapper[4699]: E0226 11:33:15.748404 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2\": container with ID starting with 9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2 not found: ID does not exist" containerID="9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.748448 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2"} err="failed to get container status \"9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2\": rpc error: code = NotFound desc = could not find container \"9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2\": container with ID starting with 9c85812a27867e8983c7de4f13e637e2314c74c4cdd4c4523f56efe2ed2922e2 not found: ID does not exist"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.748479 4699 scope.go:117] "RemoveContainer" containerID="96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.749919 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 11:33:15 crc kubenswrapper[4699]: E0226 11:33:15.750070 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558\": container with ID starting with 96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558 not found: ID does not exist" containerID="96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.750129 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558"} err="failed to get container status \"96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558\": rpc error: code = NotFound desc = could not find container \"96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558\": container with ID starting with 96b8b1dbd04dece91d5638035ad1782637a57aa6ee4d0c482a41be17ca67c558 not found: ID does not exist"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.750160 4699 scope.go:117] "RemoveContainer" containerID="178382a3de582f32c10d899adfcff30626331736ab73b2469c8ca1b37fab0c4c"
Feb 26 11:33:15 crc kubenswrapper[4699]: E0226 11:33:15.750406 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42e724c-224e-4c68-b5b4-b72d72d4ded8" containerName="glance-httpd"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.750428 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42e724c-224e-4c68-b5b4-b72d72d4ded8" containerName="glance-httpd"
Feb 26 11:33:15 crc kubenswrapper[4699]: E0226 11:33:15.750468 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f1bc3b-c587-4c47-bbc2-3dca2240d30c" containerName="glance-log"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.750475 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f1bc3b-c587-4c47-bbc2-3dca2240d30c" containerName="glance-log"
Feb 26 11:33:15 crc kubenswrapper[4699]: E0226 11:33:15.750483 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42e724c-224e-4c68-b5b4-b72d72d4ded8" containerName="glance-log"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.750489 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42e724c-224e-4c68-b5b4-b72d72d4ded8" containerName="glance-log"
Feb 26 11:33:15 crc kubenswrapper[4699]: E0226 11:33:15.750500 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f1bc3b-c587-4c47-bbc2-3dca2240d30c" containerName="glance-httpd"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.750507 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f1bc3b-c587-4c47-bbc2-3dca2240d30c" containerName="glance-httpd"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.750655 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f1bc3b-c587-4c47-bbc2-3dca2240d30c" containerName="glance-httpd"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.750673 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42e724c-224e-4c68-b5b4-b72d72d4ded8" containerName="glance-log"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.750681 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f1bc3b-c587-4c47-bbc2-3dca2240d30c" containerName="glance-log"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.750689 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42e724c-224e-4c68-b5b4-b72d72d4ded8" containerName="glance-httpd"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.751744 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.757768 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.757836 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.770023 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.795498 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03f1bc3b-c587-4c47-bbc2-3dca2240d30c-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.795538 4699 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.800067 4699 scope.go:117] "RemoveContainer" containerID="6389553c6ab212c6bdd09de56f8a6c0bc3ab110475816ef41318a3d8e60aa198"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.896826 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.896882 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-logs\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.896990 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.897170 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-config-data\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.897212 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-scripts\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.897232 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4xhd\" (UniqueName: \"kubernetes.io/projected/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-kube-api-access-b4xhd\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0"
Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.897283 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.897299 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.977640 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.987725 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.999161 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-config-data\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.999215 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-scripts\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.999234 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4xhd\" (UniqueName: 
\"kubernetes.io/projected/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-kube-api-access-b4xhd\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.999266 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.999283 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.999338 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.999368 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-logs\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.999436 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:15 crc kubenswrapper[4699]: I0226 11:33:15.999922 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.003998 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.004306 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.004956 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.005831 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.011677 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.015383 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-logs\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.015598 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.017092 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.019720 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-scripts\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.054851 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4xhd\" (UniqueName: \"kubernetes.io/projected/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-kube-api-access-b4xhd\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 
11:33:16.055872 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.056945 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c58ea0a-4ad4-47cf-8976-a004ef7e56da-config-data\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.069979 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9c58ea0a-4ad4-47cf-8976-a004ef7e56da\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.101258 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.101299 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k462j\" (UniqueName: \"kubernetes.io/projected/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-kube-api-access-k462j\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.101371 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.101406 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-logs\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.101429 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.101482 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.101524 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.101540 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.202754 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k462j\" (UniqueName: \"kubernetes.io/projected/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-kube-api-access-k462j\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.202876 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.202937 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-logs\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.202966 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.203032 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " 
pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.203084 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.203103 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.203136 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.203188 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.203594 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-logs\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc 
kubenswrapper[4699]: I0226 11:33:16.203893 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.207183 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.209686 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.210094 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.212310 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.232522 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-k462j\" (UniqueName: \"kubernetes.io/projected/796738f1-8a6c-4e91-bdfe-bee2f252b3fc-kube-api-access-k462j\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.240756 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"796738f1-8a6c-4e91-bdfe-bee2f252b3fc\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.275904 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f1bc3b-c587-4c47-bbc2-3dca2240d30c" path="/var/lib/kubelet/pods/03f1bc3b-c587-4c47-bbc2-3dca2240d30c/volumes" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.276706 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42e724c-224e-4c68-b5b4-b72d72d4ded8" path="/var/lib/kubelet/pods/d42e724c-224e-4c68-b5b4-b72d72d4ded8/volumes" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.371435 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.445520 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.623811 4699 generic.go:334] "Generic (PLEG): container finished" podID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerID="c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a" exitCode=0 Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.623851 4699 generic.go:334] "Generic (PLEG): container finished" podID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerID="2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db" exitCode=2 Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.623864 4699 generic.go:334] "Generic (PLEG): container finished" podID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerID="69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e" exitCode=0 Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.624796 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"208a51e1-6d1d-4dc4-be5e-fa414dd87c53","Type":"ContainerDied","Data":"c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a"} Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.624818 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"208a51e1-6d1d-4dc4-be5e-fa414dd87c53","Type":"ContainerDied","Data":"2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db"} Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.624829 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"208a51e1-6d1d-4dc4-be5e-fa414dd87c53","Type":"ContainerDied","Data":"69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e"} Feb 26 11:33:16 crc kubenswrapper[4699]: I0226 11:33:16.982270 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.072149 4699 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.525355 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.632183 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-combined-ca-bundle\") pod \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.632229 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-config-data\") pod \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.632286 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-log-httpd\") pod \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.632306 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-run-httpd\") pod \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.632330 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snlzj\" (UniqueName: \"kubernetes.io/projected/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-kube-api-access-snlzj\") pod \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " Feb 
26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.632363 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-scripts\") pod \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.632398 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-sg-core-conf-yaml\") pod \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\" (UID: \"208a51e1-6d1d-4dc4-be5e-fa414dd87c53\") " Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.633394 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "208a51e1-6d1d-4dc4-be5e-fa414dd87c53" (UID: "208a51e1-6d1d-4dc4-be5e-fa414dd87c53"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.634828 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "208a51e1-6d1d-4dc4-be5e-fa414dd87c53" (UID: "208a51e1-6d1d-4dc4-be5e-fa414dd87c53"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.640397 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-scripts" (OuterVolumeSpecName: "scripts") pod "208a51e1-6d1d-4dc4-be5e-fa414dd87c53" (UID: "208a51e1-6d1d-4dc4-be5e-fa414dd87c53"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.640858 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9c58ea0a-4ad4-47cf-8976-a004ef7e56da","Type":"ContainerStarted","Data":"3529c7f0d736937a2fd1b50e08c54631e6f695cc73430fabf4e692c1f92856ce"} Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.641075 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-kube-api-access-snlzj" (OuterVolumeSpecName: "kube-api-access-snlzj") pod "208a51e1-6d1d-4dc4-be5e-fa414dd87c53" (UID: "208a51e1-6d1d-4dc4-be5e-fa414dd87c53"). InnerVolumeSpecName "kube-api-access-snlzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.646906 4699 generic.go:334] "Generic (PLEG): container finished" podID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerID="613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc" exitCode=0 Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.646945 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"208a51e1-6d1d-4dc4-be5e-fa414dd87c53","Type":"ContainerDied","Data":"613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc"} Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.647009 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"208a51e1-6d1d-4dc4-be5e-fa414dd87c53","Type":"ContainerDied","Data":"f5db747a1df1457a54dc26f8dc7732ec37162001229405d6e3c5d95928877c82"} Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.647011 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.647029 4699 scope.go:117] "RemoveContainer" containerID="c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.648763 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"796738f1-8a6c-4e91-bdfe-bee2f252b3fc","Type":"ContainerStarted","Data":"1432b03cd9815938adecfe960a1371ee5d84e13b1503b4e8a86ec0e75efcacdb"} Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.691471 4699 scope.go:117] "RemoveContainer" containerID="2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.693983 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "208a51e1-6d1d-4dc4-be5e-fa414dd87c53" (UID: "208a51e1-6d1d-4dc4-be5e-fa414dd87c53"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.711723 4699 scope.go:117] "RemoveContainer" containerID="69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.734154 4699 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.734191 4699 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.734203 4699 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.734217 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snlzj\" (UniqueName: \"kubernetes.io/projected/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-kube-api-access-snlzj\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.734232 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.734653 4699 scope.go:117] "RemoveContainer" containerID="613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.750761 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"208a51e1-6d1d-4dc4-be5e-fa414dd87c53" (UID: "208a51e1-6d1d-4dc4-be5e-fa414dd87c53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.778604 4699 scope.go:117] "RemoveContainer" containerID="c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a" Feb 26 11:33:17 crc kubenswrapper[4699]: E0226 11:33:17.779777 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a\": container with ID starting with c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a not found: ID does not exist" containerID="c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.779817 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a"} err="failed to get container status \"c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a\": rpc error: code = NotFound desc = could not find container \"c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a\": container with ID starting with c2d216c82faf1eaa8cebf746f867d73e9bdd46ef39c2375acff0d8012285a29a not found: ID does not exist" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.779845 4699 scope.go:117] "RemoveContainer" containerID="2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db" Feb 26 11:33:17 crc kubenswrapper[4699]: E0226 11:33:17.780544 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db\": container with ID starting with 2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db not found: ID does not exist" 
containerID="2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.780581 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db"} err="failed to get container status \"2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db\": rpc error: code = NotFound desc = could not find container \"2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db\": container with ID starting with 2ba5f61bbedbf25c6d3fc6052b14a28f362683db8e3bec9d0ffc62f27a5a70db not found: ID does not exist" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.780606 4699 scope.go:117] "RemoveContainer" containerID="69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e" Feb 26 11:33:17 crc kubenswrapper[4699]: E0226 11:33:17.781267 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e\": container with ID starting with 69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e not found: ID does not exist" containerID="69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.781313 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e"} err="failed to get container status \"69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e\": rpc error: code = NotFound desc = could not find container \"69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e\": container with ID starting with 69a68218e2ed5906b1f2da6c10e033f2dc29321a9daa1c590a95092e50555d5e not found: ID does not exist" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.781343 4699 scope.go:117] 
"RemoveContainer" containerID="613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc" Feb 26 11:33:17 crc kubenswrapper[4699]: E0226 11:33:17.781934 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc\": container with ID starting with 613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc not found: ID does not exist" containerID="613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.781969 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc"} err="failed to get container status \"613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc\": rpc error: code = NotFound desc = could not find container \"613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc\": container with ID starting with 613ad9080c787405540b5bbf9c6327648fb6d0e715ed0b9269326172474014dc not found: ID does not exist" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.788060 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-config-data" (OuterVolumeSpecName: "config-data") pod "208a51e1-6d1d-4dc4-be5e-fa414dd87c53" (UID: "208a51e1-6d1d-4dc4-be5e-fa414dd87c53"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.836656 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:17 crc kubenswrapper[4699]: I0226 11:33:17.836719 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/208a51e1-6d1d-4dc4-be5e-fa414dd87c53-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:17.996253 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.003499 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.020818 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:18 crc kubenswrapper[4699]: E0226 11:33:18.021500 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="ceilometer-central-agent" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.021574 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="ceilometer-central-agent" Feb 26 11:33:18 crc kubenswrapper[4699]: E0226 11:33:18.021684 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="sg-core" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.021934 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="sg-core" Feb 26 11:33:18 crc kubenswrapper[4699]: E0226 11:33:18.031576 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" 
containerName="proxy-httpd" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.031870 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="proxy-httpd" Feb 26 11:33:18 crc kubenswrapper[4699]: E0226 11:33:18.031962 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="ceilometer-notification-agent" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.032024 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="ceilometer-notification-agent" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.044300 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="ceilometer-notification-agent" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.044533 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="proxy-httpd" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.044637 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="sg-core" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.044697 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" containerName="ceilometer-central-agent" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.046601 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.047013 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.049839 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.050078 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.142039 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-log-httpd\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.142084 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.142108 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-config-data\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.142371 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwdvm\" (UniqueName: \"kubernetes.io/projected/4b59e03f-0c75-40b0-9eb3-d5113163f420-kube-api-access-jwdvm\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.142453 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-scripts\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.142577 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-run-httpd\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.142666 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.244801 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwdvm\" (UniqueName: \"kubernetes.io/projected/4b59e03f-0c75-40b0-9eb3-d5113163f420-kube-api-access-jwdvm\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.244872 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-scripts\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.244923 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-run-httpd\") pod \"ceilometer-0\" (UID: 
\"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.244958 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.245008 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-log-httpd\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.245027 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.245048 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-config-data\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.246581 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-run-httpd\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.246930 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-log-httpd\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.250085 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.254601 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-scripts\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.260799 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.263005 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-config-data\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.266993 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwdvm\" (UniqueName: \"kubernetes.io/projected/4b59e03f-0c75-40b0-9eb3-d5113163f420-kube-api-access-jwdvm\") pod \"ceilometer-0\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.282673 4699 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="208a51e1-6d1d-4dc4-be5e-fa414dd87c53" path="/var/lib/kubelet/pods/208a51e1-6d1d-4dc4-be5e-fa414dd87c53/volumes" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.370085 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.665323 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"796738f1-8a6c-4e91-bdfe-bee2f252b3fc","Type":"ContainerStarted","Data":"2b8fee4ffc6d987f733fcb660517e174087b7a69049cd4a1545a4a414dc25609"} Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.665623 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"796738f1-8a6c-4e91-bdfe-bee2f252b3fc","Type":"ContainerStarted","Data":"8d8ab9d1111be5f66f0565bbfdfa83bf5512fcfeaabe44e4d0202b0f795ac56d"} Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.670381 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9c58ea0a-4ad4-47cf-8976-a004ef7e56da","Type":"ContainerStarted","Data":"eda23f2ca8003c71c7bdfb45ca5b281325183f5eabcff56210b0b35d70f7be79"} Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.670409 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9c58ea0a-4ad4-47cf-8976-a004ef7e56da","Type":"ContainerStarted","Data":"2d453a4544f70f5f76c564c5d804434402017e95a912a871437a8d52c894ee6e"} Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.718960 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.71894036 podStartE2EDuration="3.71894036s" podCreationTimestamp="2026-02-26 11:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:33:18.708504527 +0000 UTC m=+1344.519330971" watchObservedRunningTime="2026-02-26 11:33:18.71894036 +0000 UTC m=+1344.529766794" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.738880 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.7388649689999998 podStartE2EDuration="3.738864969s" podCreationTimestamp="2026-02-26 11:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:33:18.734455271 +0000 UTC m=+1344.545281715" watchObservedRunningTime="2026-02-26 11:33:18.738864969 +0000 UTC m=+1344.549691403" Feb 26 11:33:18 crc kubenswrapper[4699]: I0226 11:33:18.757223 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:19 crc kubenswrapper[4699]: I0226 11:33:19.683545 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b59e03f-0c75-40b0-9eb3-d5113163f420","Type":"ContainerStarted","Data":"0e70588ca2c32d2c8bf18e61605cae154752eb6909030e1d1477c1cf1b1f9f0c"} Feb 26 11:33:20 crc kubenswrapper[4699]: I0226 11:33:20.694512 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b59e03f-0c75-40b0-9eb3-d5113163f420","Type":"ContainerStarted","Data":"04dcf7e8e201497d1cf45ed5c29abe7b3a178bdefc6cc1cf11f3cbae4131ffe7"} Feb 26 11:33:22 crc kubenswrapper[4699]: I0226 11:33:22.723255 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b59e03f-0c75-40b0-9eb3-d5113163f420","Type":"ContainerStarted","Data":"0a5d89be89958727d068c6f547173e1db9e09eeaa55949f9e4b10646a2418098"} Feb 26 11:33:22 crc kubenswrapper[4699]: I0226 11:33:22.723833 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4b59e03f-0c75-40b0-9eb3-d5113163f420","Type":"ContainerStarted","Data":"1e8f4ef353b62f20e3fa0c0b216ab5527d39228a1eccacac6ba930465493a7ed"} Feb 26 11:33:24 crc kubenswrapper[4699]: I0226 11:33:24.741450 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b59e03f-0c75-40b0-9eb3-d5113163f420","Type":"ContainerStarted","Data":"69f610b3a3266f67627b13f99326d06ba576d343b85ee61005b092a805c73f19"} Feb 26 11:33:24 crc kubenswrapper[4699]: I0226 11:33:24.741923 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:33:24 crc kubenswrapper[4699]: I0226 11:33:24.762729 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.308242364 podStartE2EDuration="7.762712938s" podCreationTimestamp="2026-02-26 11:33:17 +0000 UTC" firstStartedPulling="2026-02-26 11:33:18.74889359 +0000 UTC m=+1344.559720014" lastFinishedPulling="2026-02-26 11:33:24.203364154 +0000 UTC m=+1350.014190588" observedRunningTime="2026-02-26 11:33:24.756995692 +0000 UTC m=+1350.567822136" watchObservedRunningTime="2026-02-26 11:33:24.762712938 +0000 UTC m=+1350.573539372" Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.372510 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.372816 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.414530 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.426599 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 11:33:26 crc kubenswrapper[4699]: 
I0226 11:33:26.446960 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.447010 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.481613 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.496344 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.760336 4699 generic.go:334] "Generic (PLEG): container finished" podID="ef20f352-fa9c-4bc8-875d-d537f00f75d5" containerID="b4034fed15cab382c6c5fd47ff21f822b9c9aa9789392181d8ca9fe59c0d233d" exitCode=0 Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.760405 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vx5jv" event={"ID":"ef20f352-fa9c-4bc8-875d-d537f00f75d5","Type":"ContainerDied","Data":"b4034fed15cab382c6c5fd47ff21f822b9c9aa9789392181d8ca9fe59c0d233d"} Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.761168 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.761203 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.761216 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 11:33:26 crc kubenswrapper[4699]: I0226 11:33:26.761231 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 11:33:28 
crc kubenswrapper[4699]: I0226 11:33:28.153136 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.224279 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nmzl\" (UniqueName: \"kubernetes.io/projected/ef20f352-fa9c-4bc8-875d-d537f00f75d5-kube-api-access-7nmzl\") pod \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.225364 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-config-data\") pod \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.225462 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-combined-ca-bundle\") pod \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.225736 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-scripts\") pod \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\" (UID: \"ef20f352-fa9c-4bc8-875d-d537f00f75d5\") " Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.235831 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef20f352-fa9c-4bc8-875d-d537f00f75d5-kube-api-access-7nmzl" (OuterVolumeSpecName: "kube-api-access-7nmzl") pod "ef20f352-fa9c-4bc8-875d-d537f00f75d5" (UID: "ef20f352-fa9c-4bc8-875d-d537f00f75d5"). InnerVolumeSpecName "kube-api-access-7nmzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.237467 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-scripts" (OuterVolumeSpecName: "scripts") pod "ef20f352-fa9c-4bc8-875d-d537f00f75d5" (UID: "ef20f352-fa9c-4bc8-875d-d537f00f75d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.261038 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-config-data" (OuterVolumeSpecName: "config-data") pod "ef20f352-fa9c-4bc8-875d-d537f00f75d5" (UID: "ef20f352-fa9c-4bc8-875d-d537f00f75d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.265959 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef20f352-fa9c-4bc8-875d-d537f00f75d5" (UID: "ef20f352-fa9c-4bc8-875d-d537f00f75d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.327889 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.327922 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nmzl\" (UniqueName: \"kubernetes.io/projected/ef20f352-fa9c-4bc8-875d-d537f00f75d5-kube-api-access-7nmzl\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.327936 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.327945 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef20f352-fa9c-4bc8-875d-d537f00f75d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.706970 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.715295 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.784492 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vx5jv" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.785048 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vx5jv" event={"ID":"ef20f352-fa9c-4bc8-875d-d537f00f75d5","Type":"ContainerDied","Data":"eb24292f1cef0da184d4ae6d21a79311c33325b5ebc6fce8d6d4a49689b83c5c"} Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.785074 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb24292f1cef0da184d4ae6d21a79311c33325b5ebc6fce8d6d4a49689b83c5c" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.813914 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.814045 4699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.841183 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.889469 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 11:33:28 crc kubenswrapper[4699]: E0226 11:33:28.890020 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef20f352-fa9c-4bc8-875d-d537f00f75d5" containerName="nova-cell0-conductor-db-sync" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.890050 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef20f352-fa9c-4bc8-875d-d537f00f75d5" containerName="nova-cell0-conductor-db-sync" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.890280 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef20f352-fa9c-4bc8-875d-d537f00f75d5" containerName="nova-cell0-conductor-db-sync" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.891066 4699 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.896692 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.896937 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wwbkn" Feb 26 11:33:28 crc kubenswrapper[4699]: I0226 11:33:28.900441 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.042029 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg4kv\" (UniqueName: \"kubernetes.io/projected/2ff15a2d-962f-421b-be00-e3bf6ef22612-kube-api-access-xg4kv\") pod \"nova-cell0-conductor-0\" (UID: \"2ff15a2d-962f-421b-be00-e3bf6ef22612\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.042127 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff15a2d-962f-421b-be00-e3bf6ef22612-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2ff15a2d-962f-421b-be00-e3bf6ef22612\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.042197 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff15a2d-962f-421b-be00-e3bf6ef22612-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2ff15a2d-962f-421b-be00-e3bf6ef22612\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.144411 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg4kv\" (UniqueName: 
\"kubernetes.io/projected/2ff15a2d-962f-421b-be00-e3bf6ef22612-kube-api-access-xg4kv\") pod \"nova-cell0-conductor-0\" (UID: \"2ff15a2d-962f-421b-be00-e3bf6ef22612\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.144476 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff15a2d-962f-421b-be00-e3bf6ef22612-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2ff15a2d-962f-421b-be00-e3bf6ef22612\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.144563 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff15a2d-962f-421b-be00-e3bf6ef22612-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2ff15a2d-962f-421b-be00-e3bf6ef22612\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.158922 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff15a2d-962f-421b-be00-e3bf6ef22612-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2ff15a2d-962f-421b-be00-e3bf6ef22612\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.160928 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff15a2d-962f-421b-be00-e3bf6ef22612-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2ff15a2d-962f-421b-be00-e3bf6ef22612\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.178771 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg4kv\" (UniqueName: \"kubernetes.io/projected/2ff15a2d-962f-421b-be00-e3bf6ef22612-kube-api-access-xg4kv\") pod \"nova-cell0-conductor-0\" (UID: 
\"2ff15a2d-962f-421b-be00-e3bf6ef22612\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.230598 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.734048 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 11:33:29 crc kubenswrapper[4699]: I0226 11:33:29.804578 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2ff15a2d-962f-421b-be00-e3bf6ef22612","Type":"ContainerStarted","Data":"b17a43ba82df7a299d8ba1b054c19792fda9485d32c2d22c81f4c0723103e26b"} Feb 26 11:33:30 crc kubenswrapper[4699]: I0226 11:33:30.816548 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2ff15a2d-962f-421b-be00-e3bf6ef22612","Type":"ContainerStarted","Data":"abb700147106b7c9a2ad04b5cc3a70a9bca9d60fc1eb88d1c997133fc2921acb"} Feb 26 11:33:30 crc kubenswrapper[4699]: I0226 11:33:30.818326 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:30 crc kubenswrapper[4699]: I0226 11:33:30.847561 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.8475409369999998 podStartE2EDuration="2.847540937s" podCreationTimestamp="2026-02-26 11:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:33:30.835352283 +0000 UTC m=+1356.646178707" watchObservedRunningTime="2026-02-26 11:33:30.847540937 +0000 UTC m=+1356.658367371" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.256505 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 26 11:33:34 crc kubenswrapper[4699]: 
I0226 11:33:34.797259 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mcdml"] Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.802520 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.810805 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.811250 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.834588 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mcdml"] Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.861557 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-scripts\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.861617 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lsct\" (UniqueName: \"kubernetes.io/projected/f528c9c1-4318-4d46-9b02-43f955e04009-kube-api-access-4lsct\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.861654 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") 
" pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.861687 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-config-data\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.914162 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.915958 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.920142 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.930638 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.963789 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7zp\" (UniqueName: \"kubernetes.io/projected/6d5f37fe-0099-471b-9192-5f52735977b1-kube-api-access-mp7zp\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.963839 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-scripts\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.963874 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lsct\" (UniqueName: \"kubernetes.io/projected/f528c9c1-4318-4d46-9b02-43f955e04009-kube-api-access-4lsct\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.963910 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.963930 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.963966 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-config-data\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.964017 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.973970 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-config-data\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.976070 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-scripts\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:34 crc kubenswrapper[4699]: I0226 11:33:34.978776 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.006578 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lsct\" (UniqueName: \"kubernetes.io/projected/f528c9c1-4318-4d46-9b02-43f955e04009-kube-api-access-4lsct\") pod \"nova-cell0-cell-mapping-mcdml\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.017318 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.025889 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.037645 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.042512 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.043982 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.046632 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.058804 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.065950 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.065986 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-config-data\") pod \"nova-scheduler-0\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.066009 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znmv6\" (UniqueName: \"kubernetes.io/projected/003dad7c-8300-49a9-80d0-99dcad71fa84-kube-api-access-znmv6\") pod \"nova-scheduler-0\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " 
pod="openstack/nova-scheduler-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.066077 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp7zp\" (UniqueName: \"kubernetes.io/projected/6d5f37fe-0099-471b-9192-5f52735977b1-kube-api-access-mp7zp\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.066122 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.066200 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.076624 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.098292 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.114485 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.115143 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.130618 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp7zp\" (UniqueName: \"kubernetes.io/projected/6d5f37fe-0099-471b-9192-5f52735977b1-kube-api-access-mp7zp\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.131286 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.137552 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.138490 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.146656 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.183893 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrt7n\" (UniqueName: \"kubernetes.io/projected/3da58d42-6c34-4a38-b9dc-eeeb20542955-kube-api-access-qrt7n\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.184302 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-logs\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " 
pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.184508 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-config-data\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.184661 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-config-data\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.184778 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64fjz\" (UniqueName: \"kubernetes.io/projected/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-kube-api-access-64fjz\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.184910 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da58d42-6c34-4a38-b9dc-eeeb20542955-logs\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.185022 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.185153 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-config-data\") pod \"nova-scheduler-0\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.185259 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znmv6\" (UniqueName: \"kubernetes.io/projected/003dad7c-8300-49a9-80d0-99dcad71fa84-kube-api-access-znmv6\") pod \"nova-scheduler-0\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.185356 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.185464 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.197895 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.202483 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-5jmd5"] Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.208218 4699 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.210950 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-5jmd5"] Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.213060 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-config-data\") pod \"nova-scheduler-0\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.225374 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znmv6\" (UniqueName: \"kubernetes.io/projected/003dad7c-8300-49a9-80d0-99dcad71fa84-kube-api-access-znmv6\") pod \"nova-scheduler-0\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.247422 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.267853 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.286941 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da58d42-6c34-4a38-b9dc-eeeb20542955-logs\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.287012 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.287041 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gpwp\" (UniqueName: \"kubernetes.io/projected/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-kube-api-access-6gpwp\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.287564 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.287598 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" 
Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.288008 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.288079 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrt7n\" (UniqueName: \"kubernetes.io/projected/3da58d42-6c34-4a38-b9dc-eeeb20542955-kube-api-access-qrt7n\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.288096 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-config\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.288200 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-logs\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.289238 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-config-data\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.288677 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3da58d42-6c34-4a38-b9dc-eeeb20542955-logs\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.289333 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.289191 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-logs\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.289403 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-config-data\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.289455 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-svc\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.289480 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64fjz\" (UniqueName: \"kubernetes.io/projected/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-kube-api-access-64fjz\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " 
pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.292462 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.292879 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.293478 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-config-data\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.294487 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-config-data\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.309321 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64fjz\" (UniqueName: \"kubernetes.io/projected/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-kube-api-access-64fjz\") pod \"nova-metadata-0\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.312665 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrt7n\" (UniqueName: 
\"kubernetes.io/projected/3da58d42-6c34-4a38-b9dc-eeeb20542955-kube-api-access-qrt7n\") pod \"nova-api-0\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.394085 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-svc\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.394359 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.394391 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gpwp\" (UniqueName: \"kubernetes.io/projected/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-kube-api-access-6gpwp\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.394416 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.394510 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-config\") pod 
\"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.394589 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.396286 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-svc\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.397289 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.397619 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-config\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.406868 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " 
pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.409946 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.414860 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gpwp\" (UniqueName: \"kubernetes.io/projected/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-kube-api-access-6gpwp\") pod \"dnsmasq-dns-865f5d856f-5jmd5\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") " pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.583657 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.598252 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.614684 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.804194 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.824330 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mcdml"] Feb 26 11:33:35 crc kubenswrapper[4699]: W0226 11:33:35.830658 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod003dad7c_8300_49a9_80d0_99dcad71fa84.slice/crio-1ffae0e8d9c34eeb99720f9aac223ffd7663c8e780138c2fbc15f472417c5fda WatchSource:0}: Error finding container 1ffae0e8d9c34eeb99720f9aac223ffd7663c8e780138c2fbc15f472417c5fda: Status 404 returned error can't find the container with id 1ffae0e8d9c34eeb99720f9aac223ffd7663c8e780138c2fbc15f472417c5fda Feb 26 11:33:35 crc kubenswrapper[4699]: W0226 11:33:35.887800 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf528c9c1_4318_4d46_9b02_43f955e04009.slice/crio-53909e629fc2641cca7ffd773dbe454e7b1a7fac09f9589c009aa88c45e195ad WatchSource:0}: Error finding container 53909e629fc2641cca7ffd773dbe454e7b1a7fac09f9589c009aa88c45e195ad: Status 404 returned error can't find the container with id 53909e629fc2641cca7ffd773dbe454e7b1a7fac09f9589c009aa88c45e195ad Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.896854 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"003dad7c-8300-49a9-80d0-99dcad71fa84","Type":"ContainerStarted","Data":"1ffae0e8d9c34eeb99720f9aac223ffd7663c8e780138c2fbc15f472417c5fda"} Feb 26 11:33:35 crc kubenswrapper[4699]: I0226 11:33:35.939566 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.102695 4699 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dz84d"] Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.104245 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.106844 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.106938 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.112700 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dz84d"] Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.160416 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.225492 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-config-data\") pod \"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.225599 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5wlj\" (UniqueName: \"kubernetes.io/projected/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-kube-api-access-v5wlj\") pod \"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.225697 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-scripts\") pod \"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.225718 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.328262 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.328319 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-scripts\") pod \"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.328375 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-config-data\") pod \"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.328460 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5wlj\" 
(UniqueName: \"kubernetes.io/projected/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-kube-api-access-v5wlj\") pod \"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.340618 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.347793 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-5jmd5"] Feb 26 11:33:36 crc kubenswrapper[4699]: W0226 11:33:36.348242 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd4545ea_b0c7_4fd6_9636_a826457d4e3a.slice/crio-45f061eac629f86d341067b2a329a3980fab0dd355c7b346b4b639f44c038307 WatchSource:0}: Error finding container 45f061eac629f86d341067b2a329a3980fab0dd355c7b346b4b639f44c038307: Status 404 returned error can't find the container with id 45f061eac629f86d341067b2a329a3980fab0dd355c7b346b4b639f44c038307 Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.350089 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-config-data\") pod \"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.350445 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-scripts\") pod 
\"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.356033 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5wlj\" (UniqueName: \"kubernetes.io/projected/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-kube-api-access-v5wlj\") pod \"nova-cell1-conductor-db-sync-dz84d\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.364986 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.610849 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.951292 4699 generic.go:334] "Generic (PLEG): container finished" podID="90cd25a3-8ac5-49d2-b3a1-79c773a0b394" containerID="9bee82430e4d84a9497e3680da14bb7fec649ba1905937229370f30514994319" exitCode=0 Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.951622 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" event={"ID":"90cd25a3-8ac5-49d2-b3a1-79c773a0b394","Type":"ContainerDied","Data":"9bee82430e4d84a9497e3680da14bb7fec649ba1905937229370f30514994319"} Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.951663 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" event={"ID":"90cd25a3-8ac5-49d2-b3a1-79c773a0b394","Type":"ContainerStarted","Data":"e1b123469f14c639c8594d09af4903ba398bf0ca95a50aeadc71f0627b95230b"} Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.957242 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"6d5f37fe-0099-471b-9192-5f52735977b1","Type":"ContainerStarted","Data":"891a5f2f8e4df93b7d5e317f0bde0ca23ec3dcb73c4f3ad638024da213a38a6c"} Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.966959 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mcdml" event={"ID":"f528c9c1-4318-4d46-9b02-43f955e04009","Type":"ContainerStarted","Data":"2cee4e67f7ca1be08a16734a80281eca2dc16bb5d20a6d285f430706b65292fe"} Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.967019 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mcdml" event={"ID":"f528c9c1-4318-4d46-9b02-43f955e04009","Type":"ContainerStarted","Data":"53909e629fc2641cca7ffd773dbe454e7b1a7fac09f9589c009aa88c45e195ad"} Feb 26 11:33:36 crc kubenswrapper[4699]: I0226 11:33:36.996138 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3da58d42-6c34-4a38-b9dc-eeeb20542955","Type":"ContainerStarted","Data":"0001ca71eaea85ec4a7157192b885fb03750c2a30c308dc7404b715439e990b4"} Feb 26 11:33:37 crc kubenswrapper[4699]: I0226 11:33:37.010422 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mcdml" podStartSLOduration=3.010398302 podStartE2EDuration="3.010398302s" podCreationTimestamp="2026-02-26 11:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:33:36.993061839 +0000 UTC m=+1362.803888273" watchObservedRunningTime="2026-02-26 11:33:37.010398302 +0000 UTC m=+1362.821224746" Feb 26 11:33:37 crc kubenswrapper[4699]: I0226 11:33:37.017105 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd4545ea-b0c7-4fd6-9636-a826457d4e3a","Type":"ContainerStarted","Data":"45f061eac629f86d341067b2a329a3980fab0dd355c7b346b4b639f44c038307"} Feb 26 11:33:37 crc kubenswrapper[4699]: 
I0226 11:33:37.146802 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dz84d"] Feb 26 11:33:37 crc kubenswrapper[4699]: W0226 11:33:37.664327 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5fb37dd_bd18_4ada_97c4_3ff3e3555d8a.slice/crio-eb1129c3d9136c15c6047e7fc0342dfc057c4e5270886b2bb6ec9a038a974469 WatchSource:0}: Error finding container eb1129c3d9136c15c6047e7fc0342dfc057c4e5270886b2bb6ec9a038a974469: Status 404 returned error can't find the container with id eb1129c3d9136c15c6047e7fc0342dfc057c4e5270886b2bb6ec9a038a974469 Feb 26 11:33:38 crc kubenswrapper[4699]: I0226 11:33:38.027630 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dz84d" event={"ID":"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a","Type":"ContainerStarted","Data":"eb1129c3d9136c15c6047e7fc0342dfc057c4e5270886b2bb6ec9a038a974469"} Feb 26 11:33:38 crc kubenswrapper[4699]: I0226 11:33:38.515668 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:38 crc kubenswrapper[4699]: I0226 11:33:38.558012 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.062561 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd4545ea-b0c7-4fd6-9636-a826457d4e3a","Type":"ContainerStarted","Data":"3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5"} Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.065384 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" event={"ID":"90cd25a3-8ac5-49d2-b3a1-79c773a0b394","Type":"ContainerStarted","Data":"50c7ddb03e58cd9791ab6f41d1755213bce0ea0826aec0f5b6934548dfaf9782"} Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.065567 4699 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.071100 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dz84d" event={"ID":"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a","Type":"ContainerStarted","Data":"6a0914a3db1c0b6e1b3a5a9cf2e1d8ac0e44a6dc0eb35fc159954e4b3f365a3d"} Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.076678 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6d5f37fe-0099-471b-9192-5f52735977b1" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7" gracePeriod=30 Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.076771 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d5f37fe-0099-471b-9192-5f52735977b1","Type":"ContainerStarted","Data":"9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7"} Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.094192 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" podStartSLOduration=5.094176468 podStartE2EDuration="5.094176468s" podCreationTimestamp="2026-02-26 11:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:33:40.084190418 +0000 UTC m=+1365.895016872" watchObservedRunningTime="2026-02-26 11:33:40.094176468 +0000 UTC m=+1365.905002902" Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.100008 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"003dad7c-8300-49a9-80d0-99dcad71fa84","Type":"ContainerStarted","Data":"24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6"} Feb 26 
11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.103228 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3da58d42-6c34-4a38-b9dc-eeeb20542955","Type":"ContainerStarted","Data":"1c92a80645afa4e46d1addaafa746637c845088f3cba1406f81a67f4dfe1af22"} Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.105893 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-dz84d" podStartSLOduration=4.105874878 podStartE2EDuration="4.105874878s" podCreationTimestamp="2026-02-26 11:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:33:40.096905237 +0000 UTC m=+1365.907731671" watchObservedRunningTime="2026-02-26 11:33:40.105874878 +0000 UTC m=+1365.916701312" Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.125663 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.77719021 podStartE2EDuration="6.125646262s" podCreationTimestamp="2026-02-26 11:33:34 +0000 UTC" firstStartedPulling="2026-02-26 11:33:36.018213178 +0000 UTC m=+1361.829039602" lastFinishedPulling="2026-02-26 11:33:39.36666922 +0000 UTC m=+1365.177495654" observedRunningTime="2026-02-26 11:33:40.113337474 +0000 UTC m=+1365.924163908" watchObservedRunningTime="2026-02-26 11:33:40.125646262 +0000 UTC m=+1365.936472706" Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.138014 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.623248039 podStartE2EDuration="6.13799201s" podCreationTimestamp="2026-02-26 11:33:34 +0000 UTC" firstStartedPulling="2026-02-26 11:33:35.852447144 +0000 UTC m=+1361.663273578" lastFinishedPulling="2026-02-26 11:33:39.367191115 +0000 UTC m=+1365.178017549" observedRunningTime="2026-02-26 11:33:40.130223085 +0000 UTC 
m=+1365.941049519" watchObservedRunningTime="2026-02-26 11:33:40.13799201 +0000 UTC m=+1365.948818444" Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.250338 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:33:40 crc kubenswrapper[4699]: I0226 11:33:40.271748 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 11:33:41 crc kubenswrapper[4699]: I0226 11:33:41.169410 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3da58d42-6c34-4a38-b9dc-eeeb20542955","Type":"ContainerStarted","Data":"9d7c08d80dbb246a87244e1998a0a8a3673755ff4e13d9dbf8e24611acf9af5c"} Feb 26 11:33:41 crc kubenswrapper[4699]: I0226 11:33:41.181879 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd4545ea-b0c7-4fd6-9636-a826457d4e3a","Type":"ContainerStarted","Data":"179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd"} Feb 26 11:33:41 crc kubenswrapper[4699]: I0226 11:33:41.182257 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fd4545ea-b0c7-4fd6-9636-a826457d4e3a" containerName="nova-metadata-metadata" containerID="cri-o://179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd" gracePeriod=30 Feb 26 11:33:41 crc kubenswrapper[4699]: I0226 11:33:41.182247 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fd4545ea-b0c7-4fd6-9636-a826457d4e3a" containerName="nova-metadata-log" containerID="cri-o://3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5" gracePeriod=30 Feb 26 11:33:41 crc kubenswrapper[4699]: I0226 11:33:41.213252 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.019004042 podStartE2EDuration="6.213227946s" 
podCreationTimestamp="2026-02-26 11:33:35 +0000 UTC" firstStartedPulling="2026-02-26 11:33:36.173979571 +0000 UTC m=+1361.984806005" lastFinishedPulling="2026-02-26 11:33:39.368203465 +0000 UTC m=+1365.179029909" observedRunningTime="2026-02-26 11:33:41.201910618 +0000 UTC m=+1367.012737062" watchObservedRunningTime="2026-02-26 11:33:41.213227946 +0000 UTC m=+1367.024054400" Feb 26 11:33:41 crc kubenswrapper[4699]: I0226 11:33:41.242274 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.221559766 podStartE2EDuration="7.242249509s" podCreationTimestamp="2026-02-26 11:33:34 +0000 UTC" firstStartedPulling="2026-02-26 11:33:36.350462247 +0000 UTC m=+1362.161288681" lastFinishedPulling="2026-02-26 11:33:39.37115199 +0000 UTC m=+1365.181978424" observedRunningTime="2026-02-26 11:33:41.234773532 +0000 UTC m=+1367.045599966" watchObservedRunningTime="2026-02-26 11:33:41.242249509 +0000 UTC m=+1367.053075943" Feb 26 11:33:41 crc kubenswrapper[4699]: I0226 11:33:41.585580 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:33:41 crc kubenswrapper[4699]: I0226 11:33:41.585652 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.178231 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.193584 4699 generic.go:334] "Generic (PLEG): container finished" podID="fd4545ea-b0c7-4fd6-9636-a826457d4e3a" containerID="179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd" exitCode=0 Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.193636 4699 generic.go:334] "Generic (PLEG): container finished" podID="fd4545ea-b0c7-4fd6-9636-a826457d4e3a" containerID="3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5" exitCode=143 Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.193667 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.193684 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd4545ea-b0c7-4fd6-9636-a826457d4e3a","Type":"ContainerDied","Data":"179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd"} Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.193802 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd4545ea-b0c7-4fd6-9636-a826457d4e3a","Type":"ContainerDied","Data":"3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5"} Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.193819 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd4545ea-b0c7-4fd6-9636-a826457d4e3a","Type":"ContainerDied","Data":"45f061eac629f86d341067b2a329a3980fab0dd355c7b346b4b639f44c038307"} Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.193852 4699 scope.go:117] "RemoveContainer" containerID="179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.228958 4699 scope.go:117] "RemoveContainer" 
containerID="3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.250943 4699 scope.go:117] "RemoveContainer" containerID="179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd" Feb 26 11:33:42 crc kubenswrapper[4699]: E0226 11:33:42.251520 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd\": container with ID starting with 179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd not found: ID does not exist" containerID="179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.251555 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd"} err="failed to get container status \"179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd\": rpc error: code = NotFound desc = could not find container \"179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd\": container with ID starting with 179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd not found: ID does not exist" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.251577 4699 scope.go:117] "RemoveContainer" containerID="3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5" Feb 26 11:33:42 crc kubenswrapper[4699]: E0226 11:33:42.251824 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5\": container with ID starting with 3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5 not found: ID does not exist" containerID="3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5" Feb 26 11:33:42 crc 
kubenswrapper[4699]: I0226 11:33:42.251861 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5"} err="failed to get container status \"3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5\": rpc error: code = NotFound desc = could not find container \"3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5\": container with ID starting with 3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5 not found: ID does not exist" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.251877 4699 scope.go:117] "RemoveContainer" containerID="179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.252255 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd"} err="failed to get container status \"179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd\": rpc error: code = NotFound desc = could not find container \"179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd\": container with ID starting with 179c94fbb230260f4209c907fc41a84062208454b917138ea5aac422ebf727bd not found: ID does not exist" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.252306 4699 scope.go:117] "RemoveContainer" containerID="3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.252743 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5"} err="failed to get container status \"3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5\": rpc error: code = NotFound desc = could not find container \"3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5\": container 
with ID starting with 3da1172a5fa169e6833849831b27d81d421a9de5b2685cc3e9f6f1b36d5c03a5 not found: ID does not exist" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.366229 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-config-data\") pod \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.366349 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-combined-ca-bundle\") pod \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.366376 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64fjz\" (UniqueName: \"kubernetes.io/projected/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-kube-api-access-64fjz\") pod \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.367388 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-logs\") pod \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\" (UID: \"fd4545ea-b0c7-4fd6-9636-a826457d4e3a\") " Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.367769 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-logs" (OuterVolumeSpecName: "logs") pod "fd4545ea-b0c7-4fd6-9636-a826457d4e3a" (UID: "fd4545ea-b0c7-4fd6-9636-a826457d4e3a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.367989 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.373197 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-kube-api-access-64fjz" (OuterVolumeSpecName: "kube-api-access-64fjz") pod "fd4545ea-b0c7-4fd6-9636-a826457d4e3a" (UID: "fd4545ea-b0c7-4fd6-9636-a826457d4e3a"). InnerVolumeSpecName "kube-api-access-64fjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.402298 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd4545ea-b0c7-4fd6-9636-a826457d4e3a" (UID: "fd4545ea-b0c7-4fd6-9636-a826457d4e3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.412997 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-config-data" (OuterVolumeSpecName: "config-data") pod "fd4545ea-b0c7-4fd6-9636-a826457d4e3a" (UID: "fd4545ea-b0c7-4fd6-9636-a826457d4e3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.470074 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.470130 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.470147 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64fjz\" (UniqueName: \"kubernetes.io/projected/fd4545ea-b0c7-4fd6-9636-a826457d4e3a-kube-api-access-64fjz\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.527751 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.549604 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.561834 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:42 crc kubenswrapper[4699]: E0226 11:33:42.562629 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4545ea-b0c7-4fd6-9636-a826457d4e3a" containerName="nova-metadata-log" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.562652 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4545ea-b0c7-4fd6-9636-a826457d4e3a" containerName="nova-metadata-log" Feb 26 11:33:42 crc kubenswrapper[4699]: E0226 11:33:42.562680 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4545ea-b0c7-4fd6-9636-a826457d4e3a" containerName="nova-metadata-metadata" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.562688 4699 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4545ea-b0c7-4fd6-9636-a826457d4e3a" containerName="nova-metadata-metadata" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.562864 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4545ea-b0c7-4fd6-9636-a826457d4e3a" containerName="nova-metadata-log" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.562881 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4545ea-b0c7-4fd6-9636-a826457d4e3a" containerName="nova-metadata-metadata" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.563928 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.567936 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.568056 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.572424 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.572705 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.572753 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0740e570-d27e-4d97-b511-315a9ad45022-logs\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.572785 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxwjp\" (UniqueName: \"kubernetes.io/projected/0740e570-d27e-4d97-b511-315a9ad45022-kube-api-access-wxwjp\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.572916 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-config-data\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.601245 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.673639 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.673716 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.673737 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0740e570-d27e-4d97-b511-315a9ad45022-logs\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.673754 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxwjp\" (UniqueName: \"kubernetes.io/projected/0740e570-d27e-4d97-b511-315a9ad45022-kube-api-access-wxwjp\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.673794 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-config-data\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.674311 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0740e570-d27e-4d97-b511-315a9ad45022-logs\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.678008 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.678413 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-config-data\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " 
pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.678473 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.703810 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxwjp\" (UniqueName: \"kubernetes.io/projected/0740e570-d27e-4d97-b511-315a9ad45022-kube-api-access-wxwjp\") pod \"nova-metadata-0\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " pod="openstack/nova-metadata-0" Feb 26 11:33:42 crc kubenswrapper[4699]: I0226 11:33:42.894713 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:33:43 crc kubenswrapper[4699]: I0226 11:33:43.366073 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:43 crc kubenswrapper[4699]: W0226 11:33:43.370464 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0740e570_d27e_4d97_b511_315a9ad45022.slice/crio-599226ea072da23c2b7a52b3304a9265a01ddf31d06132414e61defb38999e48 WatchSource:0}: Error finding container 599226ea072da23c2b7a52b3304a9265a01ddf31d06132414e61defb38999e48: Status 404 returned error can't find the container with id 599226ea072da23c2b7a52b3304a9265a01ddf31d06132414e61defb38999e48 Feb 26 11:33:44 crc kubenswrapper[4699]: I0226 11:33:44.215592 4699 generic.go:334] "Generic (PLEG): container finished" podID="f528c9c1-4318-4d46-9b02-43f955e04009" containerID="2cee4e67f7ca1be08a16734a80281eca2dc16bb5d20a6d285f430706b65292fe" exitCode=0 Feb 26 11:33:44 crc kubenswrapper[4699]: I0226 11:33:44.215634 4699 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell0-cell-mapping-mcdml" event={"ID":"f528c9c1-4318-4d46-9b02-43f955e04009","Type":"ContainerDied","Data":"2cee4e67f7ca1be08a16734a80281eca2dc16bb5d20a6d285f430706b65292fe"} Feb 26 11:33:44 crc kubenswrapper[4699]: I0226 11:33:44.218190 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0740e570-d27e-4d97-b511-315a9ad45022","Type":"ContainerStarted","Data":"56bcb9d42e1b3abd08748801a926fbec8d7021ec641d0c2f8df7fcff3ae44555"} Feb 26 11:33:44 crc kubenswrapper[4699]: I0226 11:33:44.218251 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0740e570-d27e-4d97-b511-315a9ad45022","Type":"ContainerStarted","Data":"2064f2d54f436918410771cf684160b85d090a94e656dd353921b47f48e5a9bd"} Feb 26 11:33:44 crc kubenswrapper[4699]: I0226 11:33:44.218265 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0740e570-d27e-4d97-b511-315a9ad45022","Type":"ContainerStarted","Data":"599226ea072da23c2b7a52b3304a9265a01ddf31d06132414e61defb38999e48"} Feb 26 11:33:44 crc kubenswrapper[4699]: I0226 11:33:44.258109 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.258079541 podStartE2EDuration="2.258079541s" podCreationTimestamp="2026-02-26 11:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:33:44.248216244 +0000 UTC m=+1370.059042698" watchObservedRunningTime="2026-02-26 11:33:44.258079541 +0000 UTC m=+1370.068905995" Feb 26 11:33:44 crc kubenswrapper[4699]: I0226 11:33:44.271705 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4545ea-b0c7-4fd6-9636-a826457d4e3a" path="/var/lib/kubelet/pods/fd4545ea-b0c7-4fd6-9636-a826457d4e3a/volumes" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.269161 4699 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.307454 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.603363 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.603404 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.616810 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.629692 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.643940 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-combined-ca-bundle\") pod \"f528c9c1-4318-4d46-9b02-43f955e04009\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.644166 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-scripts\") pod \"f528c9c1-4318-4d46-9b02-43f955e04009\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.644281 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-config-data\") pod \"f528c9c1-4318-4d46-9b02-43f955e04009\" (UID: 
\"f528c9c1-4318-4d46-9b02-43f955e04009\") " Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.644381 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lsct\" (UniqueName: \"kubernetes.io/projected/f528c9c1-4318-4d46-9b02-43f955e04009-kube-api-access-4lsct\") pod \"f528c9c1-4318-4d46-9b02-43f955e04009\" (UID: \"f528c9c1-4318-4d46-9b02-43f955e04009\") " Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.654972 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-scripts" (OuterVolumeSpecName: "scripts") pod "f528c9c1-4318-4d46-9b02-43f955e04009" (UID: "f528c9c1-4318-4d46-9b02-43f955e04009"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.671699 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f528c9c1-4318-4d46-9b02-43f955e04009-kube-api-access-4lsct" (OuterVolumeSpecName: "kube-api-access-4lsct") pod "f528c9c1-4318-4d46-9b02-43f955e04009" (UID: "f528c9c1-4318-4d46-9b02-43f955e04009"). InnerVolumeSpecName "kube-api-access-4lsct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.711727 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-config-data" (OuterVolumeSpecName: "config-data") pod "f528c9c1-4318-4d46-9b02-43f955e04009" (UID: "f528c9c1-4318-4d46-9b02-43f955e04009"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.715379 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vlzrl"] Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.716051 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" podUID="9fa27ea0-52eb-406f-8256-68b4a471e452" containerName="dnsmasq-dns" containerID="cri-o://b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21" gracePeriod=10 Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.729533 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f528c9c1-4318-4d46-9b02-43f955e04009" (UID: "f528c9c1-4318-4d46-9b02-43f955e04009"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.750836 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.750878 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lsct\" (UniqueName: \"kubernetes.io/projected/f528c9c1-4318-4d46-9b02-43f955e04009-kube-api-access-4lsct\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.750893 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:45 crc kubenswrapper[4699]: I0226 11:33:45.750906 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f528c9c1-4318-4d46-9b02-43f955e04009-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.234535 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.237364 4699 generic.go:334] "Generic (PLEG): container finished" podID="9fa27ea0-52eb-406f-8256-68b4a471e452" containerID="b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21" exitCode=0 Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.237412 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" event={"ID":"9fa27ea0-52eb-406f-8256-68b4a471e452","Type":"ContainerDied","Data":"b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21"} Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.237438 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.237466 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-vlzrl" event={"ID":"9fa27ea0-52eb-406f-8256-68b4a471e452","Type":"ContainerDied","Data":"096662e32232c28cf3046778c91211f7c3482d79260670ba5c8b5347692e739f"} Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.237492 4699 scope.go:117] "RemoveContainer" containerID="b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.240942 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mcdml" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.246496 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mcdml" event={"ID":"f528c9c1-4318-4d46-9b02-43f955e04009","Type":"ContainerDied","Data":"53909e629fc2641cca7ffd773dbe454e7b1a7fac09f9589c009aa88c45e195ad"} Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.246569 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53909e629fc2641cca7ffd773dbe454e7b1a7fac09f9589c009aa88c45e195ad" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.258986 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-swift-storage-0\") pod \"9fa27ea0-52eb-406f-8256-68b4a471e452\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.259222 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-config\") pod \"9fa27ea0-52eb-406f-8256-68b4a471e452\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.259309 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-sb\") pod \"9fa27ea0-52eb-406f-8256-68b4a471e452\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.259341 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-nb\") pod \"9fa27ea0-52eb-406f-8256-68b4a471e452\" (UID: 
\"9fa27ea0-52eb-406f-8256-68b4a471e452\") " Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.259405 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-svc\") pod \"9fa27ea0-52eb-406f-8256-68b4a471e452\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.259449 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x97ms\" (UniqueName: \"kubernetes.io/projected/9fa27ea0-52eb-406f-8256-68b4a471e452-kube-api-access-x97ms\") pod \"9fa27ea0-52eb-406f-8256-68b4a471e452\" (UID: \"9fa27ea0-52eb-406f-8256-68b4a471e452\") " Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.271824 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa27ea0-52eb-406f-8256-68b4a471e452-kube-api-access-x97ms" (OuterVolumeSpecName: "kube-api-access-x97ms") pod "9fa27ea0-52eb-406f-8256-68b4a471e452" (UID: "9fa27ea0-52eb-406f-8256-68b4a471e452"). InnerVolumeSpecName "kube-api-access-x97ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.274300 4699 scope.go:117] "RemoveContainer" containerID="c85028e72c3a0e7a111082381d8b68740bba7121da4a177d5d4e87fed9daf888" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.316554 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.327489 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9fa27ea0-52eb-406f-8256-68b4a471e452" (UID: "9fa27ea0-52eb-406f-8256-68b4a471e452"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.337709 4699 scope.go:117] "RemoveContainer" containerID="b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21" Feb 26 11:33:46 crc kubenswrapper[4699]: E0226 11:33:46.338223 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21\": container with ID starting with b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21 not found: ID does not exist" containerID="b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.338264 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21"} err="failed to get container status \"b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21\": rpc error: code = NotFound desc = could not find container \"b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21\": container with ID starting with b6434f4c6dfd6f80388c5828582829cf6f0feba6a3ec19ba253ae96d727b8a21 not found: ID does not exist" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.338289 4699 scope.go:117] "RemoveContainer" containerID="c85028e72c3a0e7a111082381d8b68740bba7121da4a177d5d4e87fed9daf888" Feb 26 11:33:46 crc kubenswrapper[4699]: E0226 11:33:46.339406 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c85028e72c3a0e7a111082381d8b68740bba7121da4a177d5d4e87fed9daf888\": container with ID starting with c85028e72c3a0e7a111082381d8b68740bba7121da4a177d5d4e87fed9daf888 not found: ID does not exist" containerID="c85028e72c3a0e7a111082381d8b68740bba7121da4a177d5d4e87fed9daf888" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.339442 
4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85028e72c3a0e7a111082381d8b68740bba7121da4a177d5d4e87fed9daf888"} err="failed to get container status \"c85028e72c3a0e7a111082381d8b68740bba7121da4a177d5d4e87fed9daf888\": rpc error: code = NotFound desc = could not find container \"c85028e72c3a0e7a111082381d8b68740bba7121da4a177d5d4e87fed9daf888\": container with ID starting with c85028e72c3a0e7a111082381d8b68740bba7121da4a177d5d4e87fed9daf888 not found: ID does not exist" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.344069 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9fa27ea0-52eb-406f-8256-68b4a471e452" (UID: "9fa27ea0-52eb-406f-8256-68b4a471e452"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.359651 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fa27ea0-52eb-406f-8256-68b4a471e452" (UID: "9fa27ea0-52eb-406f-8256-68b4a471e452"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.362071 4699 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.362106 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.362131 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.362140 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x97ms\" (UniqueName: \"kubernetes.io/projected/9fa27ea0-52eb-406f-8256-68b4a471e452-kube-api-access-x97ms\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.382679 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-config" (OuterVolumeSpecName: "config") pod "9fa27ea0-52eb-406f-8256-68b4a471e452" (UID: "9fa27ea0-52eb-406f-8256-68b4a471e452"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.404082 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9fa27ea0-52eb-406f-8256-68b4a471e452" (UID: "9fa27ea0-52eb-406f-8256-68b4a471e452"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.463928 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.463960 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fa27ea0-52eb-406f-8256-68b4a471e452-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.519794 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.520658 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerName="nova-api-api" containerID="cri-o://9d7c08d80dbb246a87244e1998a0a8a3673755ff4e13d9dbf8e24611acf9af5c" gracePeriod=30 Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.520831 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerName="nova-api-log" containerID="cri-o://1c92a80645afa4e46d1addaafa746637c845088f3cba1406f81a67f4dfe1af22" gracePeriod=30 Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.529715 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": EOF" Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.529921 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": EOF" Feb 26 
11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.573081 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.577626 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0740e570-d27e-4d97-b511-315a9ad45022" containerName="nova-metadata-metadata" containerID="cri-o://56bcb9d42e1b3abd08748801a926fbec8d7021ec641d0c2f8df7fcff3ae44555" gracePeriod=30 Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.577596 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0740e570-d27e-4d97-b511-315a9ad45022" containerName="nova-metadata-log" containerID="cri-o://2064f2d54f436918410771cf684160b85d090a94e656dd353921b47f48e5a9bd" gracePeriod=30 Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.653052 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vlzrl"] Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.675218 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-vlzrl"] Feb 26 11:33:46 crc kubenswrapper[4699]: I0226 11:33:46.834450 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.286409 4699 generic.go:334] "Generic (PLEG): container finished" podID="0740e570-d27e-4d97-b511-315a9ad45022" containerID="56bcb9d42e1b3abd08748801a926fbec8d7021ec641d0c2f8df7fcff3ae44555" exitCode=0 Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.286440 4699 generic.go:334] "Generic (PLEG): container finished" podID="0740e570-d27e-4d97-b511-315a9ad45022" containerID="2064f2d54f436918410771cf684160b85d090a94e656dd353921b47f48e5a9bd" exitCode=143 Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.286458 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0740e570-d27e-4d97-b511-315a9ad45022","Type":"ContainerDied","Data":"56bcb9d42e1b3abd08748801a926fbec8d7021ec641d0c2f8df7fcff3ae44555"} Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.286511 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0740e570-d27e-4d97-b511-315a9ad45022","Type":"ContainerDied","Data":"2064f2d54f436918410771cf684160b85d090a94e656dd353921b47f48e5a9bd"} Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.291949 4699 generic.go:334] "Generic (PLEG): container finished" podID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerID="1c92a80645afa4e46d1addaafa746637c845088f3cba1406f81a67f4dfe1af22" exitCode=143 Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.292008 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3da58d42-6c34-4a38-b9dc-eeeb20542955","Type":"ContainerDied","Data":"1c92a80645afa4e46d1addaafa746637c845088f3cba1406f81a67f4dfe1af22"} Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.515197 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.688410 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-combined-ca-bundle\") pod \"0740e570-d27e-4d97-b511-315a9ad45022\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.688560 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-nova-metadata-tls-certs\") pod \"0740e570-d27e-4d97-b511-315a9ad45022\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.688597 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-config-data\") pod \"0740e570-d27e-4d97-b511-315a9ad45022\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.688652 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0740e570-d27e-4d97-b511-315a9ad45022-logs\") pod \"0740e570-d27e-4d97-b511-315a9ad45022\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.688810 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxwjp\" (UniqueName: \"kubernetes.io/projected/0740e570-d27e-4d97-b511-315a9ad45022-kube-api-access-wxwjp\") pod \"0740e570-d27e-4d97-b511-315a9ad45022\" (UID: \"0740e570-d27e-4d97-b511-315a9ad45022\") " Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.690948 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0740e570-d27e-4d97-b511-315a9ad45022-logs" (OuterVolumeSpecName: "logs") pod "0740e570-d27e-4d97-b511-315a9ad45022" (UID: "0740e570-d27e-4d97-b511-315a9ad45022"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.698581 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0740e570-d27e-4d97-b511-315a9ad45022-kube-api-access-wxwjp" (OuterVolumeSpecName: "kube-api-access-wxwjp") pod "0740e570-d27e-4d97-b511-315a9ad45022" (UID: "0740e570-d27e-4d97-b511-315a9ad45022"). InnerVolumeSpecName "kube-api-access-wxwjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.714323 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0740e570-d27e-4d97-b511-315a9ad45022" (UID: "0740e570-d27e-4d97-b511-315a9ad45022"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.731965 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-config-data" (OuterVolumeSpecName: "config-data") pod "0740e570-d27e-4d97-b511-315a9ad45022" (UID: "0740e570-d27e-4d97-b511-315a9ad45022"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.758914 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0740e570-d27e-4d97-b511-315a9ad45022" (UID: "0740e570-d27e-4d97-b511-315a9ad45022"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.791752 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.791803 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0740e570-d27e-4d97-b511-315a9ad45022-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.791819 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxwjp\" (UniqueName: \"kubernetes.io/projected/0740e570-d27e-4d97-b511-315a9ad45022-kube-api-access-wxwjp\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.791831 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:47 crc kubenswrapper[4699]: I0226 11:33:47.791841 4699 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0740e570-d27e-4d97-b511-315a9ad45022-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.285503 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa27ea0-52eb-406f-8256-68b4a471e452" path="/var/lib/kubelet/pods/9fa27ea0-52eb-406f-8256-68b4a471e452/volumes" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.320377 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="003dad7c-8300-49a9-80d0-99dcad71fa84" containerName="nova-scheduler-scheduler" 
containerID="cri-o://24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6" gracePeriod=30 Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.321173 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.321182 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0740e570-d27e-4d97-b511-315a9ad45022","Type":"ContainerDied","Data":"599226ea072da23c2b7a52b3304a9265a01ddf31d06132414e61defb38999e48"} Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.321502 4699 scope.go:117] "RemoveContainer" containerID="56bcb9d42e1b3abd08748801a926fbec8d7021ec641d0c2f8df7fcff3ae44555" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.353267 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.362182 4699 scope.go:117] "RemoveContainer" containerID="2064f2d54f436918410771cf684160b85d090a94e656dd353921b47f48e5a9bd" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.373456 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.388565 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.406295 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:48 crc kubenswrapper[4699]: E0226 11:33:48.406815 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa27ea0-52eb-406f-8256-68b4a471e452" containerName="dnsmasq-dns" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.406828 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa27ea0-52eb-406f-8256-68b4a471e452" containerName="dnsmasq-dns" Feb 26 11:33:48 crc kubenswrapper[4699]: E0226 
11:33:48.406840 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0740e570-d27e-4d97-b511-315a9ad45022" containerName="nova-metadata-log" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.406847 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="0740e570-d27e-4d97-b511-315a9ad45022" containerName="nova-metadata-log" Feb 26 11:33:48 crc kubenswrapper[4699]: E0226 11:33:48.406875 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa27ea0-52eb-406f-8256-68b4a471e452" containerName="init" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.406881 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa27ea0-52eb-406f-8256-68b4a471e452" containerName="init" Feb 26 11:33:48 crc kubenswrapper[4699]: E0226 11:33:48.406897 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f528c9c1-4318-4d46-9b02-43f955e04009" containerName="nova-manage" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.406905 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f528c9c1-4318-4d46-9b02-43f955e04009" containerName="nova-manage" Feb 26 11:33:48 crc kubenswrapper[4699]: E0226 11:33:48.406918 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0740e570-d27e-4d97-b511-315a9ad45022" containerName="nova-metadata-metadata" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.406924 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="0740e570-d27e-4d97-b511-315a9ad45022" containerName="nova-metadata-metadata" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.407097 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f528c9c1-4318-4d46-9b02-43f955e04009" containerName="nova-manage" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.407128 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="0740e570-d27e-4d97-b511-315a9ad45022" containerName="nova-metadata-log" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.407152 
4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa27ea0-52eb-406f-8256-68b4a471e452" containerName="dnsmasq-dns" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.407161 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="0740e570-d27e-4d97-b511-315a9ad45022" containerName="nova-metadata-metadata" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.408293 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.415730 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.415933 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.420897 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:48 crc kubenswrapper[4699]: E0226 11:33:48.498408 4699 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0740e570_d27e_4d97_b511_315a9ad45022.slice\": RecentStats: unable to find data in memory cache]" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.514297 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj8g6\" (UniqueName: \"kubernetes.io/projected/c847caf4-446a-4738-88a8-26d1628c91f7-kube-api-access-rj8g6\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.514400 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-config-data\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.514419 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c847caf4-446a-4738-88a8-26d1628c91f7-logs\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.514468 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.514530 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.616894 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.617549 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj8g6\" (UniqueName: \"kubernetes.io/projected/c847caf4-446a-4738-88a8-26d1628c91f7-kube-api-access-rj8g6\") pod \"nova-metadata-0\" (UID: 
\"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.617752 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-config-data\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.617868 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c847caf4-446a-4738-88a8-26d1628c91f7-logs\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.618016 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.618342 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c847caf4-446a-4738-88a8-26d1628c91f7-logs\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.622418 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.623280 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.626552 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-config-data\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.638736 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj8g6\" (UniqueName: \"kubernetes.io/projected/c847caf4-446a-4738-88a8-26d1628c91f7-kube-api-access-rj8g6\") pod \"nova-metadata-0\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " pod="openstack/nova-metadata-0" Feb 26 11:33:48 crc kubenswrapper[4699]: I0226 11:33:48.732955 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:33:49 crc kubenswrapper[4699]: I0226 11:33:49.293238 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:33:49 crc kubenswrapper[4699]: W0226 11:33:49.308645 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc847caf4_446a_4738_88a8_26d1628c91f7.slice/crio-3e0522c5f3203ed953a2188504efca913e34205ca0ad66ee7025a214978a16b0 WatchSource:0}: Error finding container 3e0522c5f3203ed953a2188504efca913e34205ca0ad66ee7025a214978a16b0: Status 404 returned error can't find the container with id 3e0522c5f3203ed953a2188504efca913e34205ca0ad66ee7025a214978a16b0 Feb 26 11:33:49 crc kubenswrapper[4699]: I0226 11:33:49.333685 4699 generic.go:334] "Generic (PLEG): container finished" podID="b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a" containerID="6a0914a3db1c0b6e1b3a5a9cf2e1d8ac0e44a6dc0eb35fc159954e4b3f365a3d" exitCode=0 Feb 26 11:33:49 crc kubenswrapper[4699]: I0226 11:33:49.333818 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dz84d" event={"ID":"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a","Type":"ContainerDied","Data":"6a0914a3db1c0b6e1b3a5a9cf2e1d8ac0e44a6dc0eb35fc159954e4b3f365a3d"} Feb 26 11:33:49 crc kubenswrapper[4699]: I0226 11:33:49.338442 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c847caf4-446a-4738-88a8-26d1628c91f7","Type":"ContainerStarted","Data":"3e0522c5f3203ed953a2188504efca913e34205ca0ad66ee7025a214978a16b0"} Feb 26 11:33:50 crc kubenswrapper[4699]: E0226 11:33:50.271212 4699 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.272612 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0740e570-d27e-4d97-b511-315a9ad45022" path="/var/lib/kubelet/pods/0740e570-d27e-4d97-b511-315a9ad45022/volumes" Feb 26 11:33:50 crc kubenswrapper[4699]: E0226 11:33:50.273153 4699 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 11:33:50 crc kubenswrapper[4699]: E0226 11:33:50.275436 4699 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 11:33:50 crc kubenswrapper[4699]: E0226 11:33:50.275471 4699 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="003dad7c-8300-49a9-80d0-99dcad71fa84" containerName="nova-scheduler-scheduler" Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.357909 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c847caf4-446a-4738-88a8-26d1628c91f7","Type":"ContainerStarted","Data":"36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81"} Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.357970 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c847caf4-446a-4738-88a8-26d1628c91f7","Type":"ContainerStarted","Data":"7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa"} Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.382693 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.382671304 podStartE2EDuration="2.382671304s" podCreationTimestamp="2026-02-26 11:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:33:50.377832273 +0000 UTC m=+1376.188658727" watchObservedRunningTime="2026-02-26 11:33:50.382671304 +0000 UTC m=+1376.193497738" Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.682548 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.762346 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5wlj\" (UniqueName: \"kubernetes.io/projected/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-kube-api-access-v5wlj\") pod \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.762510 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-scripts\") pod \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.762561 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-config-data\") pod \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " Feb 26 11:33:50 crc kubenswrapper[4699]: 
I0226 11:33:50.762597 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-combined-ca-bundle\") pod \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\" (UID: \"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a\") " Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.780141 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-kube-api-access-v5wlj" (OuterVolumeSpecName: "kube-api-access-v5wlj") pod "b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a" (UID: "b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a"). InnerVolumeSpecName "kube-api-access-v5wlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.786571 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-scripts" (OuterVolumeSpecName: "scripts") pod "b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a" (UID: "b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.798098 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a" (UID: "b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.819941 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-config-data" (OuterVolumeSpecName: "config-data") pod "b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a" (UID: "b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.866511 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.866561 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.866573 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:50 crc kubenswrapper[4699]: I0226 11:33:50.866583 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5wlj\" (UniqueName: \"kubernetes.io/projected/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a-kube-api-access-v5wlj\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.369055 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dz84d" event={"ID":"b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a","Type":"ContainerDied","Data":"eb1129c3d9136c15c6047e7fc0342dfc057c4e5270886b2bb6ec9a038a974469"} Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.369102 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb1129c3d9136c15c6047e7fc0342dfc057c4e5270886b2bb6ec9a038a974469" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.369104 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dz84d" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.423477 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 11:33:51 crc kubenswrapper[4699]: E0226 11:33:51.423967 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a" containerName="nova-cell1-conductor-db-sync" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.423991 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a" containerName="nova-cell1-conductor-db-sync" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.424261 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a" containerName="nova-cell1-conductor-db-sync" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.425028 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.427276 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.437089 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.476618 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2b3846-c197-4cc6-a442-0f466d97d53d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ff2b3846-c197-4cc6-a442-0f466d97d53d\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.476683 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ff2b3846-c197-4cc6-a442-0f466d97d53d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ff2b3846-c197-4cc6-a442-0f466d97d53d\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.477498 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbd65\" (UniqueName: \"kubernetes.io/projected/ff2b3846-c197-4cc6-a442-0f466d97d53d-kube-api-access-kbd65\") pod \"nova-cell1-conductor-0\" (UID: \"ff2b3846-c197-4cc6-a442-0f466d97d53d\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.579594 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbd65\" (UniqueName: \"kubernetes.io/projected/ff2b3846-c197-4cc6-a442-0f466d97d53d-kube-api-access-kbd65\") pod \"nova-cell1-conductor-0\" (UID: \"ff2b3846-c197-4cc6-a442-0f466d97d53d\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.579710 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2b3846-c197-4cc6-a442-0f466d97d53d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ff2b3846-c197-4cc6-a442-0f466d97d53d\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.579736 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2b3846-c197-4cc6-a442-0f466d97d53d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ff2b3846-c197-4cc6-a442-0f466d97d53d\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.586303 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2b3846-c197-4cc6-a442-0f466d97d53d-combined-ca-bundle\") 
pod \"nova-cell1-conductor-0\" (UID: \"ff2b3846-c197-4cc6-a442-0f466d97d53d\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.588947 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2b3846-c197-4cc6-a442-0f466d97d53d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ff2b3846-c197-4cc6-a442-0f466d97d53d\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.601052 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbd65\" (UniqueName: \"kubernetes.io/projected/ff2b3846-c197-4cc6-a442-0f466d97d53d-kube-api-access-kbd65\") pod \"nova-cell1-conductor-0\" (UID: \"ff2b3846-c197-4cc6-a442-0f466d97d53d\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.807303 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.922465 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.987424 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-config-data\") pod \"003dad7c-8300-49a9-80d0-99dcad71fa84\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.988612 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znmv6\" (UniqueName: \"kubernetes.io/projected/003dad7c-8300-49a9-80d0-99dcad71fa84-kube-api-access-znmv6\") pod \"003dad7c-8300-49a9-80d0-99dcad71fa84\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.988706 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-combined-ca-bundle\") pod \"003dad7c-8300-49a9-80d0-99dcad71fa84\" (UID: \"003dad7c-8300-49a9-80d0-99dcad71fa84\") " Feb 26 11:33:51 crc kubenswrapper[4699]: I0226 11:33:51.996076 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/003dad7c-8300-49a9-80d0-99dcad71fa84-kube-api-access-znmv6" (OuterVolumeSpecName: "kube-api-access-znmv6") pod "003dad7c-8300-49a9-80d0-99dcad71fa84" (UID: "003dad7c-8300-49a9-80d0-99dcad71fa84"). InnerVolumeSpecName "kube-api-access-znmv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.024270 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "003dad7c-8300-49a9-80d0-99dcad71fa84" (UID: "003dad7c-8300-49a9-80d0-99dcad71fa84"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.030301 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-config-data" (OuterVolumeSpecName: "config-data") pod "003dad7c-8300-49a9-80d0-99dcad71fa84" (UID: "003dad7c-8300-49a9-80d0-99dcad71fa84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.092425 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.092742 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znmv6\" (UniqueName: \"kubernetes.io/projected/003dad7c-8300-49a9-80d0-99dcad71fa84-kube-api-access-znmv6\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.092825 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003dad7c-8300-49a9-80d0-99dcad71fa84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.273323 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 11:33:52 crc kubenswrapper[4699]: W0226 11:33:52.276759 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff2b3846_c197_4cc6_a442_0f466d97d53d.slice/crio-27a0aa6703826927aa2df442d9029c896cbbf644f8979f33f9b71125f0032906 WatchSource:0}: Error finding container 27a0aa6703826927aa2df442d9029c896cbbf644f8979f33f9b71125f0032906: Status 404 returned error can't find the container with id 27a0aa6703826927aa2df442d9029c896cbbf644f8979f33f9b71125f0032906 Feb 26 
11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.382310 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.382488 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf" containerName="kube-state-metrics" containerID="cri-o://d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b" gracePeriod=30 Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.384649 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ff2b3846-c197-4cc6-a442-0f466d97d53d","Type":"ContainerStarted","Data":"27a0aa6703826927aa2df442d9029c896cbbf644f8979f33f9b71125f0032906"} Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.387827 4699 generic.go:334] "Generic (PLEG): container finished" podID="003dad7c-8300-49a9-80d0-99dcad71fa84" containerID="24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6" exitCode=0 Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.387967 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"003dad7c-8300-49a9-80d0-99dcad71fa84","Type":"ContainerDied","Data":"24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6"} Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.388047 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"003dad7c-8300-49a9-80d0-99dcad71fa84","Type":"ContainerDied","Data":"1ffae0e8d9c34eeb99720f9aac223ffd7663c8e780138c2fbc15f472417c5fda"} Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.388142 4699 scope.go:117] "RemoveContainer" containerID="24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.388321 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.412437 4699 scope.go:117] "RemoveContainer" containerID="24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6" Feb 26 11:33:52 crc kubenswrapper[4699]: E0226 11:33:52.412989 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6\": container with ID starting with 24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6 not found: ID does not exist" containerID="24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.413090 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6"} err="failed to get container status \"24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6\": rpc error: code = NotFound desc = could not find container \"24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6\": container with ID starting with 24ef43097973fb5b65cb505842a287bfdce1031cc145b5b783382e2c06e63cf6 not found: ID does not exist" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.494284 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.505067 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.513917 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:33:52 crc kubenswrapper[4699]: E0226 11:33:52.515604 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003dad7c-8300-49a9-80d0-99dcad71fa84" containerName="nova-scheduler-scheduler" Feb 26 11:33:52 crc 
kubenswrapper[4699]: I0226 11:33:52.515626 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="003dad7c-8300-49a9-80d0-99dcad71fa84" containerName="nova-scheduler-scheduler" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.515799 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="003dad7c-8300-49a9-80d0-99dcad71fa84" containerName="nova-scheduler-scheduler" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.516424 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.519362 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.535474 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.603043 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-config-data\") pod \"nova-scheduler-0\" (UID: \"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.603208 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.603264 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85xhh\" (UniqueName: \"kubernetes.io/projected/a8bf50ee-a389-4a35-8899-81d885e1ec38-kube-api-access-85xhh\") pod \"nova-scheduler-0\" (UID: 
\"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.706609 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85xhh\" (UniqueName: \"kubernetes.io/projected/a8bf50ee-a389-4a35-8899-81d885e1ec38-kube-api-access-85xhh\") pod \"nova-scheduler-0\" (UID: \"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.706781 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-config-data\") pod \"nova-scheduler-0\" (UID: \"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.706884 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.711779 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-config-data\") pod \"nova-scheduler-0\" (UID: \"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.714820 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.751005 4699 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-85xhh\" (UniqueName: \"kubernetes.io/projected/a8bf50ee-a389-4a35-8899-81d885e1ec38-kube-api-access-85xhh\") pod \"nova-scheduler-0\" (UID: \"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.844783 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:33:52 crc kubenswrapper[4699]: I0226 11:33:52.922343 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.012036 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4vnz\" (UniqueName: \"kubernetes.io/projected/2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf-kube-api-access-r4vnz\") pod \"2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf\" (UID: \"2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf\") " Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.016799 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf-kube-api-access-r4vnz" (OuterVolumeSpecName: "kube-api-access-r4vnz") pod "2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf" (UID: "2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf"). InnerVolumeSpecName "kube-api-access-r4vnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.114080 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4vnz\" (UniqueName: \"kubernetes.io/projected/2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf-kube-api-access-r4vnz\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.393418 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.406517 4699 generic.go:334] "Generic (PLEG): container finished" podID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerID="9d7c08d80dbb246a87244e1998a0a8a3673755ff4e13d9dbf8e24611acf9af5c" exitCode=0 Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.406908 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3da58d42-6c34-4a38-b9dc-eeeb20542955","Type":"ContainerDied","Data":"9d7c08d80dbb246a87244e1998a0a8a3673755ff4e13d9dbf8e24611acf9af5c"} Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.411549 4699 generic.go:334] "Generic (PLEG): container finished" podID="2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf" containerID="d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b" exitCode=2 Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.411728 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.411879 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf","Type":"ContainerDied","Data":"d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b"} Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.411979 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf","Type":"ContainerDied","Data":"d71534977c30792b789d4e1ac180ec5af3f9ed3738ad0ab651747396010424ea"} Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.412070 4699 scope.go:117] "RemoveContainer" containerID="d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.433416 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ff2b3846-c197-4cc6-a442-0f466d97d53d","Type":"ContainerStarted","Data":"9dd8780b5b90628e97e8e5acf91fb3f6e703343d5528f5eff51fd3ebf041878e"} Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.433701 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.435258 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.465610 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.465580155 podStartE2EDuration="2.465580155s" podCreationTimestamp="2026-02-26 11:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:33:53.450716434 +0000 UTC m=+1379.261542878" watchObservedRunningTime="2026-02-26 11:33:53.465580155 +0000 UTC m=+1379.276406599" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.477552 4699 scope.go:117] "RemoveContainer" containerID="d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b" Feb 26 11:33:53 crc kubenswrapper[4699]: E0226 11:33:53.480717 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b\": container with ID starting with d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b not found: ID does not exist" containerID="d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.480790 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b"} err="failed to get container status \"d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b\": rpc error: code = NotFound desc = could not find container \"d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b\": container with ID starting with d2f101e4ea8ddf069d258e4d0b303b0807a2a990a84939a9396f47359ac82f3b not found: ID does not exist" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.513311 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/kube-state-metrics-0"] Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.524239 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-config-data\") pod \"3da58d42-6c34-4a38-b9dc-eeeb20542955\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.524685 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da58d42-6c34-4a38-b9dc-eeeb20542955-logs\") pod \"3da58d42-6c34-4a38-b9dc-eeeb20542955\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.524860 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrt7n\" (UniqueName: \"kubernetes.io/projected/3da58d42-6c34-4a38-b9dc-eeeb20542955-kube-api-access-qrt7n\") pod \"3da58d42-6c34-4a38-b9dc-eeeb20542955\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.525004 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-combined-ca-bundle\") pod \"3da58d42-6c34-4a38-b9dc-eeeb20542955\" (UID: \"3da58d42-6c34-4a38-b9dc-eeeb20542955\") " Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.526346 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.527276 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da58d42-6c34-4a38-b9dc-eeeb20542955-logs" (OuterVolumeSpecName: "logs") pod "3da58d42-6c34-4a38-b9dc-eeeb20542955" (UID: "3da58d42-6c34-4a38-b9dc-eeeb20542955"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.531158 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da58d42-6c34-4a38-b9dc-eeeb20542955-kube-api-access-qrt7n" (OuterVolumeSpecName: "kube-api-access-qrt7n") pod "3da58d42-6c34-4a38-b9dc-eeeb20542955" (UID: "3da58d42-6c34-4a38-b9dc-eeeb20542955"). InnerVolumeSpecName "kube-api-access-qrt7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.534422 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:33:53 crc kubenswrapper[4699]: E0226 11:33:53.534983 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf" containerName="kube-state-metrics" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.535010 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf" containerName="kube-state-metrics" Feb 26 11:33:53 crc kubenswrapper[4699]: E0226 11:33:53.535052 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerName="nova-api-log" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.535061 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerName="nova-api-log" Feb 26 11:33:53 crc kubenswrapper[4699]: E0226 11:33:53.535083 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerName="nova-api-api" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.535091 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerName="nova-api-api" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.535329 4699 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerName="nova-api-log" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.535352 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da58d42-6c34-4a38-b9dc-eeeb20542955" containerName="nova-api-api" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.535363 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf" containerName="kube-state-metrics" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.536440 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.538620 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.539019 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.545586 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.568730 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-config-data" (OuterVolumeSpecName: "config-data") pod "3da58d42-6c34-4a38-b9dc-eeeb20542955" (UID: "3da58d42-6c34-4a38-b9dc-eeeb20542955"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.568992 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3da58d42-6c34-4a38-b9dc-eeeb20542955" (UID: "3da58d42-6c34-4a38-b9dc-eeeb20542955"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.627073 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c685fadd-b283-40bc-9de2-3372317b9875-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.627232 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c685fadd-b283-40bc-9de2-3372317b9875-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.627282 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs6jm\" (UniqueName: \"kubernetes.io/projected/c685fadd-b283-40bc-9de2-3372317b9875-kube-api-access-fs6jm\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.627340 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c685fadd-b283-40bc-9de2-3372317b9875-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.627425 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da58d42-6c34-4a38-b9dc-eeeb20542955-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 
11:33:53.627438 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrt7n\" (UniqueName: \"kubernetes.io/projected/3da58d42-6c34-4a38-b9dc-eeeb20542955-kube-api-access-qrt7n\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.627447 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.627455 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da58d42-6c34-4a38-b9dc-eeeb20542955-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.729431 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c685fadd-b283-40bc-9de2-3372317b9875-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.729817 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs6jm\" (UniqueName: \"kubernetes.io/projected/c685fadd-b283-40bc-9de2-3372317b9875-kube-api-access-fs6jm\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.729895 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c685fadd-b283-40bc-9de2-3372317b9875-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 
11:33:53.729964 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c685fadd-b283-40bc-9de2-3372317b9875-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.733291 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.733324 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.734679 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c685fadd-b283-40bc-9de2-3372317b9875-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.735901 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c685fadd-b283-40bc-9de2-3372317b9875-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.743770 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c685fadd-b283-40bc-9de2-3372317b9875-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.753318 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs6jm\" (UniqueName: 
\"kubernetes.io/projected/c685fadd-b283-40bc-9de2-3372317b9875-kube-api-access-fs6jm\") pod \"kube-state-metrics-0\" (UID: \"c685fadd-b283-40bc-9de2-3372317b9875\") " pod="openstack/kube-state-metrics-0" Feb 26 11:33:53 crc kubenswrapper[4699]: I0226 11:33:53.855620 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.271421 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="003dad7c-8300-49a9-80d0-99dcad71fa84" path="/var/lib/kubelet/pods/003dad7c-8300-49a9-80d0-99dcad71fa84/volumes" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.272201 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf" path="/var/lib/kubelet/pods/2ef8c2b0-a7bb-41f7-9b6e-e0e8283428cf/volumes" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.299787 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:33:54 crc kubenswrapper[4699]: W0226 11:33:54.301221 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc685fadd_b283_40bc_9de2_3372317b9875.slice/crio-5063064ca1b59884e4dbb2b88c753d4b5430e522d28b2e9ed2abe45f1c48a096 WatchSource:0}: Error finding container 5063064ca1b59884e4dbb2b88c753d4b5430e522d28b2e9ed2abe45f1c48a096: Status 404 returned error can't find the container with id 5063064ca1b59884e4dbb2b88c753d4b5430e522d28b2e9ed2abe45f1c48a096 Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.398634 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.398976 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="ceilometer-central-agent" 
containerID="cri-o://04dcf7e8e201497d1cf45ed5c29abe7b3a178bdefc6cc1cf11f3cbae4131ffe7" gracePeriod=30 Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.399128 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="sg-core" containerID="cri-o://0a5d89be89958727d068c6f547173e1db9e09eeaa55949f9e4b10646a2418098" gracePeriod=30 Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.399185 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="proxy-httpd" containerID="cri-o://69f610b3a3266f67627b13f99326d06ba576d343b85ee61005b092a805c73f19" gracePeriod=30 Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.399134 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="ceilometer-notification-agent" containerID="cri-o://1e8f4ef353b62f20e3fa0c0b216ab5527d39228a1eccacac6ba930465493a7ed" gracePeriod=30 Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.445385 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3da58d42-6c34-4a38-b9dc-eeeb20542955","Type":"ContainerDied","Data":"0001ca71eaea85ec4a7157192b885fb03750c2a30c308dc7404b715439e990b4"} Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.445432 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.445453 4699 scope.go:117] "RemoveContainer" containerID="9d7c08d80dbb246a87244e1998a0a8a3673755ff4e13d9dbf8e24611acf9af5c" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.453572 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c685fadd-b283-40bc-9de2-3372317b9875","Type":"ContainerStarted","Data":"5063064ca1b59884e4dbb2b88c753d4b5430e522d28b2e9ed2abe45f1c48a096"} Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.455267 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a8bf50ee-a389-4a35-8899-81d885e1ec38","Type":"ContainerStarted","Data":"c97d92e712559c7220f14c08b215ac1ea015fa517ad26257d3edff7ee08e2ec0"} Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.455334 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a8bf50ee-a389-4a35-8899-81d885e1ec38","Type":"ContainerStarted","Data":"35561c1ff93cac360e0003512da7f67e357d0a40bd9387c2cdd037287561205d"} Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.470328 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.474789 4699 scope.go:117] "RemoveContainer" containerID="1c92a80645afa4e46d1addaafa746637c845088f3cba1406f81a67f4dfe1af22" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.482328 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.491194 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.493067 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.493049423 podStartE2EDuration="2.493049423s" 
podCreationTimestamp="2026-02-26 11:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:33:54.481681093 +0000 UTC m=+1380.292507527" watchObservedRunningTime="2026-02-26 11:33:54.493049423 +0000 UTC m=+1380.303875847" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.493181 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.497026 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.523737 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.548452 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5783da86-2f9d-42da-ae1e-7df1f4190892-logs\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.548541 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvq5n\" (UniqueName: \"kubernetes.io/projected/5783da86-2f9d-42da-ae1e-7df1f4190892-kube-api-access-wvq5n\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.548692 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-config-data\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.548744 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.650921 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5783da86-2f9d-42da-ae1e-7df1f4190892-logs\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.651243 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvq5n\" (UniqueName: \"kubernetes.io/projected/5783da86-2f9d-42da-ae1e-7df1f4190892-kube-api-access-wvq5n\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.651422 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-config-data\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.651545 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.653049 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5783da86-2f9d-42da-ae1e-7df1f4190892-logs\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " 
pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.662206 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-config-data\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.662542 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.669560 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvq5n\" (UniqueName: \"kubernetes.io/projected/5783da86-2f9d-42da-ae1e-7df1f4190892-kube-api-access-wvq5n\") pod \"nova-api-0\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") " pod="openstack/nova-api-0" Feb 26 11:33:54 crc kubenswrapper[4699]: I0226 11:33:54.823721 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:33:55 crc kubenswrapper[4699]: I0226 11:33:55.395891 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:33:55 crc kubenswrapper[4699]: W0226 11:33:55.396808 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5783da86_2f9d_42da_ae1e_7df1f4190892.slice/crio-643f5d9494a203696bd97ca6ac87808480d0591a9fdea1382c0442db74ffeabf WatchSource:0}: Error finding container 643f5d9494a203696bd97ca6ac87808480d0591a9fdea1382c0442db74ffeabf: Status 404 returned error can't find the container with id 643f5d9494a203696bd97ca6ac87808480d0591a9fdea1382c0442db74ffeabf Feb 26 11:33:55 crc kubenswrapper[4699]: I0226 11:33:55.470435 4699 generic.go:334] "Generic (PLEG): container finished" podID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerID="69f610b3a3266f67627b13f99326d06ba576d343b85ee61005b092a805c73f19" exitCode=0 Feb 26 11:33:55 crc kubenswrapper[4699]: I0226 11:33:55.471940 4699 generic.go:334] "Generic (PLEG): container finished" podID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerID="0a5d89be89958727d068c6f547173e1db9e09eeaa55949f9e4b10646a2418098" exitCode=2 Feb 26 11:33:55 crc kubenswrapper[4699]: I0226 11:33:55.472044 4699 generic.go:334] "Generic (PLEG): container finished" podID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerID="04dcf7e8e201497d1cf45ed5c29abe7b3a178bdefc6cc1cf11f3cbae4131ffe7" exitCode=0 Feb 26 11:33:55 crc kubenswrapper[4699]: I0226 11:33:55.470606 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b59e03f-0c75-40b0-9eb3-d5113163f420","Type":"ContainerDied","Data":"69f610b3a3266f67627b13f99326d06ba576d343b85ee61005b092a805c73f19"} Feb 26 11:33:55 crc kubenswrapper[4699]: I0226 11:33:55.472320 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4b59e03f-0c75-40b0-9eb3-d5113163f420","Type":"ContainerDied","Data":"0a5d89be89958727d068c6f547173e1db9e09eeaa55949f9e4b10646a2418098"} Feb 26 11:33:55 crc kubenswrapper[4699]: I0226 11:33:55.472447 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b59e03f-0c75-40b0-9eb3-d5113163f420","Type":"ContainerDied","Data":"04dcf7e8e201497d1cf45ed5c29abe7b3a178bdefc6cc1cf11f3cbae4131ffe7"} Feb 26 11:33:55 crc kubenswrapper[4699]: I0226 11:33:55.474949 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c685fadd-b283-40bc-9de2-3372317b9875","Type":"ContainerStarted","Data":"e9e80eb50f4f804f9b27d0cd5128479b0efd273dc49d2eace097d699b1117db5"} Feb 26 11:33:55 crc kubenswrapper[4699]: I0226 11:33:55.475173 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 26 11:33:55 crc kubenswrapper[4699]: I0226 11:33:55.478593 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5783da86-2f9d-42da-ae1e-7df1f4190892","Type":"ContainerStarted","Data":"643f5d9494a203696bd97ca6ac87808480d0591a9fdea1382c0442db74ffeabf"} Feb 26 11:33:55 crc kubenswrapper[4699]: I0226 11:33:55.492761 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.081343088 podStartE2EDuration="2.492722135s" podCreationTimestamp="2026-02-26 11:33:53 +0000 UTC" firstStartedPulling="2026-02-26 11:33:54.303827378 +0000 UTC m=+1380.114653812" lastFinishedPulling="2026-02-26 11:33:54.715206425 +0000 UTC m=+1380.526032859" observedRunningTime="2026-02-26 11:33:55.492332764 +0000 UTC m=+1381.303159218" watchObservedRunningTime="2026-02-26 11:33:55.492722135 +0000 UTC m=+1381.303548589" Feb 26 11:33:56 crc kubenswrapper[4699]: I0226 11:33:56.272064 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3da58d42-6c34-4a38-b9dc-eeeb20542955" path="/var/lib/kubelet/pods/3da58d42-6c34-4a38-b9dc-eeeb20542955/volumes" Feb 26 11:33:56 crc kubenswrapper[4699]: I0226 11:33:56.502967 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5783da86-2f9d-42da-ae1e-7df1f4190892","Type":"ContainerStarted","Data":"7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4"} Feb 26 11:33:56 crc kubenswrapper[4699]: I0226 11:33:56.503008 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5783da86-2f9d-42da-ae1e-7df1f4190892","Type":"ContainerStarted","Data":"5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd"} Feb 26 11:33:56 crc kubenswrapper[4699]: I0226 11:33:56.521580 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.521555224 podStartE2EDuration="2.521555224s" podCreationTimestamp="2026-02-26 11:33:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:33:56.521168143 +0000 UTC m=+1382.331994577" watchObservedRunningTime="2026-02-26 11:33:56.521555224 +0000 UTC m=+1382.332381668" Feb 26 11:33:57 crc kubenswrapper[4699]: I0226 11:33:57.845108 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.522623 4699 generic.go:334] "Generic (PLEG): container finished" podID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerID="1e8f4ef353b62f20e3fa0c0b216ab5527d39228a1eccacac6ba930465493a7ed" exitCode=0 Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.522847 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b59e03f-0c75-40b0-9eb3-d5113163f420","Type":"ContainerDied","Data":"1e8f4ef353b62f20e3fa0c0b216ab5527d39228a1eccacac6ba930465493a7ed"} Feb 26 11:33:58 
crc kubenswrapper[4699]: I0226 11:33:58.687376 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.736179 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.736235 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.739667 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-config-data\") pod \"4b59e03f-0c75-40b0-9eb3-d5113163f420\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.739744 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-combined-ca-bundle\") pod \"4b59e03f-0c75-40b0-9eb3-d5113163f420\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.739845 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-sg-core-conf-yaml\") pod \"4b59e03f-0c75-40b0-9eb3-d5113163f420\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.739976 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-scripts\") pod \"4b59e03f-0c75-40b0-9eb3-d5113163f420\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.740075 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-run-httpd\") pod \"4b59e03f-0c75-40b0-9eb3-d5113163f420\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.740175 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwdvm\" (UniqueName: \"kubernetes.io/projected/4b59e03f-0c75-40b0-9eb3-d5113163f420-kube-api-access-jwdvm\") pod \"4b59e03f-0c75-40b0-9eb3-d5113163f420\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.740367 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-log-httpd\") pod \"4b59e03f-0c75-40b0-9eb3-d5113163f420\" (UID: \"4b59e03f-0c75-40b0-9eb3-d5113163f420\") " Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.743091 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4b59e03f-0c75-40b0-9eb3-d5113163f420" (UID: "4b59e03f-0c75-40b0-9eb3-d5113163f420"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.743841 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4b59e03f-0c75-40b0-9eb3-d5113163f420" (UID: "4b59e03f-0c75-40b0-9eb3-d5113163f420"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.757820 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-scripts" (OuterVolumeSpecName: "scripts") pod "4b59e03f-0c75-40b0-9eb3-d5113163f420" (UID: "4b59e03f-0c75-40b0-9eb3-d5113163f420"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.770583 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b59e03f-0c75-40b0-9eb3-d5113163f420-kube-api-access-jwdvm" (OuterVolumeSpecName: "kube-api-access-jwdvm") pod "4b59e03f-0c75-40b0-9eb3-d5113163f420" (UID: "4b59e03f-0c75-40b0-9eb3-d5113163f420"). InnerVolumeSpecName "kube-api-access-jwdvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.794944 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4b59e03f-0c75-40b0-9eb3-d5113163f420" (UID: "4b59e03f-0c75-40b0-9eb3-d5113163f420"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.845668 4699 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.845709 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.845722 4699 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.845734 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwdvm\" (UniqueName: \"kubernetes.io/projected/4b59e03f-0c75-40b0-9eb3-d5113163f420-kube-api-access-jwdvm\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.845747 4699 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b59e03f-0c75-40b0-9eb3-d5113163f420-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.851964 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b59e03f-0c75-40b0-9eb3-d5113163f420" (UID: "4b59e03f-0c75-40b0-9eb3-d5113163f420"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.883894 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-config-data" (OuterVolumeSpecName: "config-data") pod "4b59e03f-0c75-40b0-9eb3-d5113163f420" (UID: "4b59e03f-0c75-40b0-9eb3-d5113163f420"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.947483 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:58 crc kubenswrapper[4699]: I0226 11:33:58.947531 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b59e03f-0c75-40b0-9eb3-d5113163f420-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.534496 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b59e03f-0c75-40b0-9eb3-d5113163f420","Type":"ContainerDied","Data":"0e70588ca2c32d2c8bf18e61605cae154752eb6909030e1d1477c1cf1b1f9f0c"} Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.534585 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.534828 4699 scope.go:117] "RemoveContainer" containerID="69f610b3a3266f67627b13f99326d06ba576d343b85ee61005b092a805c73f19" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.554882 4699 scope.go:117] "RemoveContainer" containerID="0a5d89be89958727d068c6f547173e1db9e09eeaa55949f9e4b10646a2418098" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.583325 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.584210 4699 scope.go:117] "RemoveContainer" containerID="1e8f4ef353b62f20e3fa0c0b216ab5527d39228a1eccacac6ba930465493a7ed" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.591282 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.612932 4699 scope.go:117] "RemoveContainer" containerID="04dcf7e8e201497d1cf45ed5c29abe7b3a178bdefc6cc1cf11f3cbae4131ffe7" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.643174 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:59 crc kubenswrapper[4699]: E0226 11:33:59.643601 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="sg-core" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.643618 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="sg-core" Feb 26 11:33:59 crc kubenswrapper[4699]: E0226 11:33:59.643642 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="proxy-httpd" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.643649 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="proxy-httpd" Feb 26 11:33:59 
crc kubenswrapper[4699]: E0226 11:33:59.643665 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="ceilometer-central-agent" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.643673 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="ceilometer-central-agent" Feb 26 11:33:59 crc kubenswrapper[4699]: E0226 11:33:59.643684 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="ceilometer-notification-agent" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.643691 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="ceilometer-notification-agent" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.643917 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="sg-core" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.643942 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="proxy-httpd" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.643960 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="ceilometer-central-agent" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.643977 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" containerName="ceilometer-notification-agent" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.647589 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.651316 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.654530 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.654769 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.655477 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.762900 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.762972 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-config-data\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.763040 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-scripts\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.763080 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.763152 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl5n4\" (UniqueName: \"kubernetes.io/projected/48cbc02a-15d3-4ae1-852f-24658804939b-kube-api-access-jl5n4\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.763195 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.763237 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-log-httpd\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.763275 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-run-httpd\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.770863 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.770843 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.866550 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.866633 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-log-httpd\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.866665 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-run-httpd\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.866724 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.866751 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-config-data\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.866793 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-scripts\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.866823 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.866859 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl5n4\" (UniqueName: \"kubernetes.io/projected/48cbc02a-15d3-4ae1-852f-24658804939b-kube-api-access-jl5n4\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.868703 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-log-httpd\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.868811 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-run-httpd\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " 
pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.872918 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.872938 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.874099 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-config-data\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.874732 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.887513 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-scripts\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.891961 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl5n4\" (UniqueName: 
\"kubernetes.io/projected/48cbc02a-15d3-4ae1-852f-24658804939b-kube-api-access-jl5n4\") pod \"ceilometer-0\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") " pod="openstack/ceilometer-0" Feb 26 11:33:59 crc kubenswrapper[4699]: I0226 11:33:59.966595 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.137606 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535094-ccf5t"] Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.140018 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535094-ccf5t" Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.143279 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.143464 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.143581 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.150986 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535094-ccf5t"] Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.182890 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfbd9\" (UniqueName: \"kubernetes.io/projected/a63cbb99-64c1-46fe-99eb-0d06cc310cba-kube-api-access-xfbd9\") pod \"auto-csr-approver-29535094-ccf5t\" (UID: \"a63cbb99-64c1-46fe-99eb-0d06cc310cba\") " pod="openshift-infra/auto-csr-approver-29535094-ccf5t" Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.275186 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4b59e03f-0c75-40b0-9eb3-d5113163f420" path="/var/lib/kubelet/pods/4b59e03f-0c75-40b0-9eb3-d5113163f420/volumes" Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.285961 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfbd9\" (UniqueName: \"kubernetes.io/projected/a63cbb99-64c1-46fe-99eb-0d06cc310cba-kube-api-access-xfbd9\") pod \"auto-csr-approver-29535094-ccf5t\" (UID: \"a63cbb99-64c1-46fe-99eb-0d06cc310cba\") " pod="openshift-infra/auto-csr-approver-29535094-ccf5t" Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.305844 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfbd9\" (UniqueName: \"kubernetes.io/projected/a63cbb99-64c1-46fe-99eb-0d06cc310cba-kube-api-access-xfbd9\") pod \"auto-csr-approver-29535094-ccf5t\" (UID: \"a63cbb99-64c1-46fe-99eb-0d06cc310cba\") " pod="openshift-infra/auto-csr-approver-29535094-ccf5t" Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.471752 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535094-ccf5t" Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.536560 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:00 crc kubenswrapper[4699]: I0226 11:34:00.947370 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535094-ccf5t"] Feb 26 11:34:01 crc kubenswrapper[4699]: I0226 11:34:01.554964 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cbc02a-15d3-4ae1-852f-24658804939b","Type":"ContainerStarted","Data":"370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3"} Feb 26 11:34:01 crc kubenswrapper[4699]: I0226 11:34:01.555399 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cbc02a-15d3-4ae1-852f-24658804939b","Type":"ContainerStarted","Data":"68d0a1dbee3e4e680859b8d4e019b458f07850a3366f9a21d8ad3957b8f3d34a"} Feb 26 11:34:01 crc kubenswrapper[4699]: I0226 11:34:01.556668 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535094-ccf5t" event={"ID":"a63cbb99-64c1-46fe-99eb-0d06cc310cba","Type":"ContainerStarted","Data":"e03c8581bed6014bcc595dc5801f6720cf5965259cb812efd65758bd0cf0dcb7"} Feb 26 11:34:01 crc kubenswrapper[4699]: I0226 11:34:01.842259 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:02 crc kubenswrapper[4699]: I0226 11:34:02.567338 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cbc02a-15d3-4ae1-852f-24658804939b","Type":"ContainerStarted","Data":"f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5"} Feb 26 11:34:02 crc kubenswrapper[4699]: I0226 11:34:02.568926 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535094-ccf5t" 
event={"ID":"a63cbb99-64c1-46fe-99eb-0d06cc310cba","Type":"ContainerStarted","Data":"2fbcb8eac2ddc22c3ecc04313ce75c8a329d85e31714a8bfe7dae5bd6310f0ad"} Feb 26 11:34:02 crc kubenswrapper[4699]: I0226 11:34:02.586950 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535094-ccf5t" podStartSLOduration=1.4889258810000001 podStartE2EDuration="2.586919258s" podCreationTimestamp="2026-02-26 11:34:00 +0000 UTC" firstStartedPulling="2026-02-26 11:34:00.943416438 +0000 UTC m=+1386.754242862" lastFinishedPulling="2026-02-26 11:34:02.041409805 +0000 UTC m=+1387.852236239" observedRunningTime="2026-02-26 11:34:02.583037145 +0000 UTC m=+1388.393863609" watchObservedRunningTime="2026-02-26 11:34:02.586919258 +0000 UTC m=+1388.397745702" Feb 26 11:34:02 crc kubenswrapper[4699]: I0226 11:34:02.845163 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 11:34:02 crc kubenswrapper[4699]: I0226 11:34:02.875866 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 11:34:03 crc kubenswrapper[4699]: I0226 11:34:03.579872 4699 generic.go:334] "Generic (PLEG): container finished" podID="a63cbb99-64c1-46fe-99eb-0d06cc310cba" containerID="2fbcb8eac2ddc22c3ecc04313ce75c8a329d85e31714a8bfe7dae5bd6310f0ad" exitCode=0 Feb 26 11:34:03 crc kubenswrapper[4699]: I0226 11:34:03.579952 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535094-ccf5t" event={"ID":"a63cbb99-64c1-46fe-99eb-0d06cc310cba","Type":"ContainerDied","Data":"2fbcb8eac2ddc22c3ecc04313ce75c8a329d85e31714a8bfe7dae5bd6310f0ad"} Feb 26 11:34:03 crc kubenswrapper[4699]: I0226 11:34:03.581975 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"48cbc02a-15d3-4ae1-852f-24658804939b","Type":"ContainerStarted","Data":"742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b"} Feb 26 11:34:03 crc kubenswrapper[4699]: I0226 11:34:03.623789 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 11:34:03 crc kubenswrapper[4699]: I0226 11:34:03.864426 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 26 11:34:04 crc kubenswrapper[4699]: I0226 11:34:04.825072 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 11:34:04 crc kubenswrapper[4699]: I0226 11:34:04.825429 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 11:34:04 crc kubenswrapper[4699]: I0226 11:34:04.996203 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535094-ccf5t" Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.085025 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfbd9\" (UniqueName: \"kubernetes.io/projected/a63cbb99-64c1-46fe-99eb-0d06cc310cba-kube-api-access-xfbd9\") pod \"a63cbb99-64c1-46fe-99eb-0d06cc310cba\" (UID: \"a63cbb99-64c1-46fe-99eb-0d06cc310cba\") " Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.091319 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a63cbb99-64c1-46fe-99eb-0d06cc310cba-kube-api-access-xfbd9" (OuterVolumeSpecName: "kube-api-access-xfbd9") pod "a63cbb99-64c1-46fe-99eb-0d06cc310cba" (UID: "a63cbb99-64c1-46fe-99eb-0d06cc310cba"). InnerVolumeSpecName "kube-api-access-xfbd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.187785 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfbd9\" (UniqueName: \"kubernetes.io/projected/a63cbb99-64c1-46fe-99eb-0d06cc310cba-kube-api-access-xfbd9\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.625744 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535094-ccf5t" Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.626251 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535094-ccf5t" event={"ID":"a63cbb99-64c1-46fe-99eb-0d06cc310cba","Type":"ContainerDied","Data":"e03c8581bed6014bcc595dc5801f6720cf5965259cb812efd65758bd0cf0dcb7"} Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.626327 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e03c8581bed6014bcc595dc5801f6720cf5965259cb812efd65758bd0cf0dcb7" Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.646290 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cbc02a-15d3-4ae1-852f-24658804939b","Type":"ContainerStarted","Data":"7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569"} Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.646808 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.708740 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535088-rwpx5"] Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.721729 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535088-rwpx5"] Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.731641 4699 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.783543847 podStartE2EDuration="6.731612972s" podCreationTimestamp="2026-02-26 11:33:59 +0000 UTC" firstStartedPulling="2026-02-26 11:34:00.557069459 +0000 UTC m=+1386.367895893" lastFinishedPulling="2026-02-26 11:34:04.505138584 +0000 UTC m=+1390.315965018" observedRunningTime="2026-02-26 11:34:05.675768141 +0000 UTC m=+1391.486594585" watchObservedRunningTime="2026-02-26 11:34:05.731612972 +0000 UTC m=+1391.542439416" Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.908326 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:05 crc kubenswrapper[4699]: I0226 11:34:05.908339 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:06 crc kubenswrapper[4699]: I0226 11:34:06.272744 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d05b2b3d-2906-4acc-aaa2-2f2674e46f27" path="/var/lib/kubelet/pods/d05b2b3d-2906-4acc-aaa2-2f2674e46f27/volumes" Feb 26 11:34:08 crc kubenswrapper[4699]: I0226 11:34:08.738840 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 11:34:08 crc kubenswrapper[4699]: I0226 11:34:08.741493 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 11:34:08 crc kubenswrapper[4699]: I0226 11:34:08.745647 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 11:34:09 crc 
kubenswrapper[4699]: I0226 11:34:09.699834 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.535461 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.598102 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp7zp\" (UniqueName: \"kubernetes.io/projected/6d5f37fe-0099-471b-9192-5f52735977b1-kube-api-access-mp7zp\") pod \"6d5f37fe-0099-471b-9192-5f52735977b1\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.598363 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-config-data\") pod \"6d5f37fe-0099-471b-9192-5f52735977b1\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.598420 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-combined-ca-bundle\") pod \"6d5f37fe-0099-471b-9192-5f52735977b1\" (UID: \"6d5f37fe-0099-471b-9192-5f52735977b1\") " Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.620000 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5f37fe-0099-471b-9192-5f52735977b1-kube-api-access-mp7zp" (OuterVolumeSpecName: "kube-api-access-mp7zp") pod "6d5f37fe-0099-471b-9192-5f52735977b1" (UID: "6d5f37fe-0099-471b-9192-5f52735977b1"). InnerVolumeSpecName "kube-api-access-mp7zp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.632002 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d5f37fe-0099-471b-9192-5f52735977b1" (UID: "6d5f37fe-0099-471b-9192-5f52735977b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.656379 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-config-data" (OuterVolumeSpecName: "config-data") pod "6d5f37fe-0099-471b-9192-5f52735977b1" (UID: "6d5f37fe-0099-471b-9192-5f52735977b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.700498 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.700526 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp7zp\" (UniqueName: \"kubernetes.io/projected/6d5f37fe-0099-471b-9192-5f52735977b1-kube-api-access-mp7zp\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.700553 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5f37fe-0099-471b-9192-5f52735977b1-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.707246 4699 generic.go:334] "Generic (PLEG): container finished" podID="6d5f37fe-0099-471b-9192-5f52735977b1" containerID="9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7" 
exitCode=137 Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.707388 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d5f37fe-0099-471b-9192-5f52735977b1","Type":"ContainerDied","Data":"9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7"} Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.707429 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.707484 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d5f37fe-0099-471b-9192-5f52735977b1","Type":"ContainerDied","Data":"891a5f2f8e4df93b7d5e317f0bde0ca23ec3dcb73c4f3ad638024da213a38a6c"} Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.707517 4699 scope.go:117] "RemoveContainer" containerID="9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.741596 4699 scope.go:117] "RemoveContainer" containerID="9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7" Feb 26 11:34:11 crc kubenswrapper[4699]: E0226 11:34:10.743105 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7\": container with ID starting with 9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7 not found: ID does not exist" containerID="9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.743183 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7"} err="failed to get container status \"9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7\": rpc error: code = NotFound 
desc = could not find container \"9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7\": container with ID starting with 9a9565174025b2a66d72042b489f69a27808267d25f5c7efcc2fce77583169c7 not found: ID does not exist" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.754604 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.773617 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.787601 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:11 crc kubenswrapper[4699]: E0226 11:34:10.788080 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5f37fe-0099-471b-9192-5f52735977b1" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.788096 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5f37fe-0099-471b-9192-5f52735977b1" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 11:34:11 crc kubenswrapper[4699]: E0226 11:34:10.788105 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63cbb99-64c1-46fe-99eb-0d06cc310cba" containerName="oc" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.788126 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63cbb99-64c1-46fe-99eb-0d06cc310cba" containerName="oc" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.788328 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="a63cbb99-64c1-46fe-99eb-0d06cc310cba" containerName="oc" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.788348 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5f37fe-0099-471b-9192-5f52735977b1" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.789064 4699 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.791270 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.798458 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.798737 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.799418 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.904257 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.904617 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.904698 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.904898 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26vgd\" (UniqueName: \"kubernetes.io/projected/8bb28763-ceae-456c-a0d6-5df33b478106-kube-api-access-26vgd\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:10.904959 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.007018 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.007084 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.007166 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26vgd\" (UniqueName: \"kubernetes.io/projected/8bb28763-ceae-456c-a0d6-5df33b478106-kube-api-access-26vgd\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " 
pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.007190 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.007236 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.011246 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.011262 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.011808 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.013270 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb28763-ceae-456c-a0d6-5df33b478106-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.023902 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26vgd\" (UniqueName: \"kubernetes.io/projected/8bb28763-ceae-456c-a0d6-5df33b478106-kube-api-access-26vgd\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bb28763-ceae-456c-a0d6-5df33b478106\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.123985 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.585102 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.585399 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.585441 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.586157 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e281597aa593fa5c9ddd67a617de4ed4d3363a8c5b9ebcaaf78cd70cd013eef6"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.586213 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://e281597aa593fa5c9ddd67a617de4ed4d3363a8c5b9ebcaaf78cd70cd013eef6" gracePeriod=600
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.718590 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="e281597aa593fa5c9ddd67a617de4ed4d3363a8c5b9ebcaaf78cd70cd013eef6" exitCode=0
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.718637 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"e281597aa593fa5c9ddd67a617de4ed4d3363a8c5b9ebcaaf78cd70cd013eef6"}
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.718681 4699 scope.go:117] "RemoveContainer" containerID="2c2d25c558a927e58d9962b6f55de97dac3f222cb5bc89a35791fca832759b03"
Feb 26 11:34:11 crc kubenswrapper[4699]: I0226 11:34:11.908551 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 26 11:34:12 crc kubenswrapper[4699]: I0226 11:34:12.273632 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5f37fe-0099-471b-9192-5f52735977b1" path="/var/lib/kubelet/pods/6d5f37fe-0099-471b-9192-5f52735977b1/volumes"
Feb 26 11:34:12 crc kubenswrapper[4699]: I0226 11:34:12.730386 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"}
Feb 26 11:34:12 crc kubenswrapper[4699]: I0226 11:34:12.739357 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8bb28763-ceae-456c-a0d6-5df33b478106","Type":"ContainerStarted","Data":"4f166a9252cb921a215a88fe04068a2f30d2e2ae3cf00bbac0de70d3ed780392"}
Feb 26 11:34:12 crc kubenswrapper[4699]: I0226 11:34:12.739418 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8bb28763-ceae-456c-a0d6-5df33b478106","Type":"ContainerStarted","Data":"0fe313e864cfef4cdb1bde0ae46f205943a65667d28cb0c5d56d3359e063c281"}
Feb 26 11:34:12 crc kubenswrapper[4699]: I0226 11:34:12.785223 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.785193265 podStartE2EDuration="2.785193265s" podCreationTimestamp="2026-02-26 11:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:12.769842959 +0000 UTC m=+1398.580669393" watchObservedRunningTime="2026-02-26 11:34:12.785193265 +0000 UTC m=+1398.596019709"
Feb 26 11:34:14 crc kubenswrapper[4699]: I0226 11:34:14.827847 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 26 11:34:14 crc kubenswrapper[4699]: I0226 11:34:14.829260 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 26 11:34:14 crc kubenswrapper[4699]: I0226 11:34:14.829763 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 26 11:34:14 crc kubenswrapper[4699]: I0226 11:34:14.829789 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 26 11:34:14 crc kubenswrapper[4699]: I0226 11:34:14.831921 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 26 11:34:14 crc kubenswrapper[4699]: I0226 11:34:14.832791 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.029160 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-n24ct"]
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.033226 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.041024 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-n24ct"]
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.191496 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.191554 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr9gh\" (UniqueName: \"kubernetes.io/projected/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-kube-api-access-mr9gh\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.191933 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-config\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.192279 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.192358 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.193389 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.296305 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.296404 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.296483 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.296530 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9gh\" (UniqueName: \"kubernetes.io/projected/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-kube-api-access-mr9gh\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.296678 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-config\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.296835 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.297425 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.298040 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.298176 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.298208 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-config\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.298386 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.328920 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9gh\" (UniqueName: \"kubernetes.io/projected/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-kube-api-access-mr9gh\") pod \"dnsmasq-dns-5c7b6c5df9-n24ct\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.352072 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:15 crc kubenswrapper[4699]: I0226 11:34:15.882840 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-n24ct"]
Feb 26 11:34:16 crc kubenswrapper[4699]: I0226 11:34:16.124778 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:16 crc kubenswrapper[4699]: I0226 11:34:16.779350 4699 generic.go:334] "Generic (PLEG): container finished" podID="a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" containerID="ac279ccad47adb2f6ab2c9bfda803625849869922644f32045786543361b143f" exitCode=0
Feb 26 11:34:16 crc kubenswrapper[4699]: I0226 11:34:16.779416 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct" event={"ID":"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8","Type":"ContainerDied","Data":"ac279ccad47adb2f6ab2c9bfda803625849869922644f32045786543361b143f"}
Feb 26 11:34:16 crc kubenswrapper[4699]: I0226 11:34:16.779761 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct" event={"ID":"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8","Type":"ContainerStarted","Data":"d1d082240eaff72440b2e6ab6682cc7abdf39c898255b3c76048247bf61866be"}
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.035917 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.036739 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="sg-core" containerID="cri-o://742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b" gracePeriod=30
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.036776 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="proxy-httpd" containerID="cri-o://7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569" gracePeriod=30
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.036739 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="ceilometer-notification-agent" containerID="cri-o://f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5" gracePeriod=30
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.037330 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="ceilometer-central-agent" containerID="cri-o://370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3" gracePeriod=30
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.048003 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.209:3000/\": EOF"
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.553402 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.794420 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct" event={"ID":"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8","Type":"ContainerStarted","Data":"6eaae1ee8cf33fbf9c5f7338398f314d84ab95982df8c9ecfdd230c190623ca8"}
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.794568 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.802759 4699 generic.go:334] "Generic (PLEG): container finished" podID="48cbc02a-15d3-4ae1-852f-24658804939b" containerID="7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569" exitCode=0
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.802927 4699 generic.go:334] "Generic (PLEG): container finished" podID="48cbc02a-15d3-4ae1-852f-24658804939b" containerID="742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b" exitCode=2
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.802983 4699 generic.go:334] "Generic (PLEG): container finished" podID="48cbc02a-15d3-4ae1-852f-24658804939b" containerID="370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3" exitCode=0
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.802866 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cbc02a-15d3-4ae1-852f-24658804939b","Type":"ContainerDied","Data":"7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569"}
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.803076 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cbc02a-15d3-4ae1-852f-24658804939b","Type":"ContainerDied","Data":"742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b"}
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.803092 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cbc02a-15d3-4ae1-852f-24658804939b","Type":"ContainerDied","Data":"370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3"}
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.803336 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerName="nova-api-log" containerID="cri-o://5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd" gracePeriod=30
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.803382 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerName="nova-api-api" containerID="cri-o://7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4" gracePeriod=30
Feb 26 11:34:17 crc kubenswrapper[4699]: I0226 11:34:17.829509 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct" podStartSLOduration=2.829486137 podStartE2EDuration="2.829486137s" podCreationTimestamp="2026-02-26 11:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:17.817420286 +0000 UTC m=+1403.628246720" watchObservedRunningTime="2026-02-26 11:34:17.829486137 +0000 UTC m=+1403.640312571"
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.736670 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.770792 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-ceilometer-tls-certs\") pod \"48cbc02a-15d3-4ae1-852f-24658804939b\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") "
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.770868 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-config-data\") pod \"48cbc02a-15d3-4ae1-852f-24658804939b\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") "
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.770931 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl5n4\" (UniqueName: \"kubernetes.io/projected/48cbc02a-15d3-4ae1-852f-24658804939b-kube-api-access-jl5n4\") pod \"48cbc02a-15d3-4ae1-852f-24658804939b\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") "
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.770963 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-sg-core-conf-yaml\") pod \"48cbc02a-15d3-4ae1-852f-24658804939b\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") "
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.771007 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-run-httpd\") pod \"48cbc02a-15d3-4ae1-852f-24658804939b\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") "
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.771064 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-combined-ca-bundle\") pod \"48cbc02a-15d3-4ae1-852f-24658804939b\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") "
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.771091 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-scripts\") pod \"48cbc02a-15d3-4ae1-852f-24658804939b\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") "
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.771133 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-log-httpd\") pod \"48cbc02a-15d3-4ae1-852f-24658804939b\" (UID: \"48cbc02a-15d3-4ae1-852f-24658804939b\") "
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.772470 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "48cbc02a-15d3-4ae1-852f-24658804939b" (UID: "48cbc02a-15d3-4ae1-852f-24658804939b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.786676 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "48cbc02a-15d3-4ae1-852f-24658804939b" (UID: "48cbc02a-15d3-4ae1-852f-24658804939b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.800339 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-scripts" (OuterVolumeSpecName: "scripts") pod "48cbc02a-15d3-4ae1-852f-24658804939b" (UID: "48cbc02a-15d3-4ae1-852f-24658804939b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.822486 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48cbc02a-15d3-4ae1-852f-24658804939b-kube-api-access-jl5n4" (OuterVolumeSpecName: "kube-api-access-jl5n4") pod "48cbc02a-15d3-4ae1-852f-24658804939b" (UID: "48cbc02a-15d3-4ae1-852f-24658804939b"). InnerVolumeSpecName "kube-api-access-jl5n4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.873620 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl5n4\" (UniqueName: \"kubernetes.io/projected/48cbc02a-15d3-4ae1-852f-24658804939b-kube-api-access-jl5n4\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.873647 4699 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.873656 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.873664 4699 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cbc02a-15d3-4ae1-852f-24658804939b-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.887268 4699 generic.go:334] "Generic (PLEG): container finished" podID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerID="5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd" exitCode=143
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.887454 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5783da86-2f9d-42da-ae1e-7df1f4190892","Type":"ContainerDied","Data":"5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd"}
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.919331 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "48cbc02a-15d3-4ae1-852f-24658804939b" (UID: "48cbc02a-15d3-4ae1-852f-24658804939b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.937573 4699 generic.go:334] "Generic (PLEG): container finished" podID="48cbc02a-15d3-4ae1-852f-24658804939b" containerID="f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5" exitCode=0
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.938731 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.939297 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cbc02a-15d3-4ae1-852f-24658804939b","Type":"ContainerDied","Data":"f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5"}
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.939323 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cbc02a-15d3-4ae1-852f-24658804939b","Type":"ContainerDied","Data":"68d0a1dbee3e4e680859b8d4e019b458f07850a3366f9a21d8ad3957b8f3d34a"}
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.939340 4699 scope.go:117] "RemoveContainer" containerID="7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569"
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.975734 4699 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:18 crc kubenswrapper[4699]: I0226 11:34:18.990997 4699 scope.go:117] "RemoveContainer" containerID="742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.012265 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "48cbc02a-15d3-4ae1-852f-24658804939b" (UID: "48cbc02a-15d3-4ae1-852f-24658804939b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.020245 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48cbc02a-15d3-4ae1-852f-24658804939b" (UID: "48cbc02a-15d3-4ae1-852f-24658804939b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.024467 4699 scope.go:117] "RemoveContainer" containerID="f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.049951 4699 scope.go:117] "RemoveContainer" containerID="370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.076379 4699 scope.go:117] "RemoveContainer" containerID="7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.077730 4699 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.077748 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:19 crc kubenswrapper[4699]: E0226 11:34:19.078702 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569\": container with ID starting with 7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569 not found: ID does not exist" containerID="7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.078771 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569"} err="failed to get container status \"7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569\": rpc error: code = NotFound desc = could not find container \"7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569\": container with ID starting with 7497611bc316a363e6faede71364bc2badf1e82a9e7163a9b0d3885e64e69569 not found: ID does not exist"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.078835 4699 scope.go:117] "RemoveContainer" containerID="742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b"
Feb 26 11:34:19 crc kubenswrapper[4699]: E0226 11:34:19.079331 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b\": container with ID starting with 742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b not found: ID does not exist" containerID="742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.079366 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b"} err="failed to get container status \"742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b\": rpc error: code = NotFound desc = could not find container \"742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b\": container with ID starting with 742583c4f8071616ca7933fdc484002228afd074d9f17ea694ae2c324eb67a8b not found: ID does not exist"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.079394 4699 scope.go:117] "RemoveContainer" containerID="f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5"
Feb 26 11:34:19 crc kubenswrapper[4699]: E0226 11:34:19.081950 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5\": container with ID starting with f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5 not found: ID does not exist" containerID="f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.081978 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5"} err="failed to get container status \"f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5\": rpc error: code = NotFound desc = could not find container \"f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5\": container with ID starting with f9988ddb231f2367ed1aeb8133431d53f6c03ac1bb83b203e17a77ec850b07c5 not found: ID does not exist"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.081995 4699 scope.go:117] "RemoveContainer" containerID="370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3"
Feb 26 11:34:19 crc kubenswrapper[4699]: E0226 11:34:19.082244 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3\": container with ID starting with 370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3 not found: ID does not exist" containerID="370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.082274 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3"} err="failed to get container status \"370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3\": rpc error: code = NotFound desc = could not find container \"370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3\": container with ID starting with 370c564c94459e8d8c4a181df1cfad272d11acf7f598532418862df6abf6a7c3 not found: ID does not exist"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.097662 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-config-data" (OuterVolumeSpecName: "config-data") pod "48cbc02a-15d3-4ae1-852f-24658804939b" (UID: "48cbc02a-15d3-4ae1-852f-24658804939b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.179448 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48cbc02a-15d3-4ae1-852f-24658804939b-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.368989 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.388306 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.414270 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 26 11:34:19 crc kubenswrapper[4699]: E0226 11:34:19.414848 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="sg-core"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.415504 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="sg-core"
Feb 26 11:34:19 crc kubenswrapper[4699]: E0226 11:34:19.415760 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="ceilometer-central-agent"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.415822 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="ceilometer-central-agent"
Feb 26 11:34:19 crc kubenswrapper[4699]: E0226 11:34:19.416063 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="proxy-httpd"
Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.416202 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="proxy-httpd"
Feb 26 11:34:19 crc kubenswrapper[4699]: E0226
11:34:19.416270 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="ceilometer-notification-agent" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.416317 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="ceilometer-notification-agent" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.416682 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="proxy-httpd" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.416854 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="ceilometer-central-agent" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.416913 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="ceilometer-notification-agent" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.417031 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" containerName="sg-core" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.420030 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.423079 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.423454 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.423430 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.425907 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.433770 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:19 crc kubenswrapper[4699]: E0226 11:34:19.454461 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-7trcj log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-7trcj log-httpd run-httpd scripts sg-core-conf-yaml]: context canceled" pod="openstack/ceilometer-0" podUID="cfc43627-a5fc-40fe-b7a4-6d04e80481dd" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.601510 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.601565 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-scripts\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.601655 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7trcj\" (UniqueName: \"kubernetes.io/projected/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-kube-api-access-7trcj\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.601928 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-run-httpd\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.602032 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.602098 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.602159 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-config-data\") pod \"ceilometer-0\" (UID: 
\"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.602269 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-log-httpd\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.703791 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-log-httpd\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.703885 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.703917 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-scripts\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.703961 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7trcj\" (UniqueName: \"kubernetes.io/projected/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-kube-api-access-7trcj\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.704011 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-run-httpd\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.704035 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.704090 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.704387 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-log-httpd\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.704534 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-run-httpd\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.704767 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-config-data\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc 
kubenswrapper[4699]: I0226 11:34:19.716282 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.716604 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-config-data\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.717939 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.718612 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-scripts\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.723347 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7trcj\" (UniqueName: \"kubernetes.io/projected/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-kube-api-access-7trcj\") pod \"ceilometer-0\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.730454 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.946836 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:19 crc kubenswrapper[4699]: I0226 11:34:19.959186 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.111606 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-sg-core-conf-yaml\") pod \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.111991 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-ceilometer-tls-certs\") pod \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.112078 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-combined-ca-bundle\") pod \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.112141 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-scripts\") pod \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.112180 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-log-httpd\") pod \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.112231 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7trcj\" (UniqueName: \"kubernetes.io/projected/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-kube-api-access-7trcj\") pod \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.112281 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-config-data\") pod \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.112329 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-run-httpd\") pod \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\" (UID: \"cfc43627-a5fc-40fe-b7a4-6d04e80481dd\") " Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.112423 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cfc43627-a5fc-40fe-b7a4-6d04e80481dd" (UID: "cfc43627-a5fc-40fe-b7a4-6d04e80481dd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.112745 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cfc43627-a5fc-40fe-b7a4-6d04e80481dd" (UID: "cfc43627-a5fc-40fe-b7a4-6d04e80481dd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.113177 4699 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.113200 4699 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.117013 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-scripts" (OuterVolumeSpecName: "scripts") pod "cfc43627-a5fc-40fe-b7a4-6d04e80481dd" (UID: "cfc43627-a5fc-40fe-b7a4-6d04e80481dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.117724 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "cfc43627-a5fc-40fe-b7a4-6d04e80481dd" (UID: "cfc43627-a5fc-40fe-b7a4-6d04e80481dd"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.117752 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cfc43627-a5fc-40fe-b7a4-6d04e80481dd" (UID: "cfc43627-a5fc-40fe-b7a4-6d04e80481dd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.117738 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-kube-api-access-7trcj" (OuterVolumeSpecName: "kube-api-access-7trcj") pod "cfc43627-a5fc-40fe-b7a4-6d04e80481dd" (UID: "cfc43627-a5fc-40fe-b7a4-6d04e80481dd"). InnerVolumeSpecName "kube-api-access-7trcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.118006 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-config-data" (OuterVolumeSpecName: "config-data") pod "cfc43627-a5fc-40fe-b7a4-6d04e80481dd" (UID: "cfc43627-a5fc-40fe-b7a4-6d04e80481dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.118417 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfc43627-a5fc-40fe-b7a4-6d04e80481dd" (UID: "cfc43627-a5fc-40fe-b7a4-6d04e80481dd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.215036 4699 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.215103 4699 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.215155 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.215169 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.215181 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7trcj\" (UniqueName: \"kubernetes.io/projected/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-kube-api-access-7trcj\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.215196 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc43627-a5fc-40fe-b7a4-6d04e80481dd-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.271297 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48cbc02a-15d3-4ae1-852f-24658804939b" path="/var/lib/kubelet/pods/48cbc02a-15d3-4ae1-852f-24658804939b/volumes" Feb 26 11:34:20 crc kubenswrapper[4699]: I0226 11:34:20.956380 4699 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.126388 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.127207 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.145488 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.157526 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.157573 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.160043 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.162042 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.162240 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.162607 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.173708 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.342475 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-scripts\") pod \"ceilometer-0\" (UID: 
\"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.342532 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.342698 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09a6eb79-27c3-465b-adae-b32d96c56b65-run-httpd\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.342939 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.343014 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-config-data\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.343071 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x27wk\" (UniqueName: \"kubernetes.io/projected/09a6eb79-27c3-465b-adae-b32d96c56b65-kube-api-access-x27wk\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0" Feb 26 11:34:21 crc 
kubenswrapper[4699]: I0226 11:34:21.343096 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09a6eb79-27c3-465b-adae-b32d96c56b65-log-httpd\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.343205 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.441058 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.444804 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.444871 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-config-data\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0" Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.444912 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x27wk\" (UniqueName: \"kubernetes.io/projected/09a6eb79-27c3-465b-adae-b32d96c56b65-kube-api-access-x27wk\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0" Feb 26 11:34:21 crc 
kubenswrapper[4699]: I0226 11:34:21.445613 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09a6eb79-27c3-465b-adae-b32d96c56b65-log-httpd\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.445685 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.445742 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-scripts\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.445777 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.445850 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09a6eb79-27c3-465b-adae-b32d96c56b65-run-httpd\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.445967 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09a6eb79-27c3-465b-adae-b32d96c56b65-log-httpd\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.446434 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09a6eb79-27c3-465b-adae-b32d96c56b65-run-httpd\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.450091 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-scripts\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.450497 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.454050 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-config-data\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.458011 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.458050 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a6eb79-27c3-465b-adae-b32d96c56b65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.469745 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x27wk\" (UniqueName: \"kubernetes.io/projected/09a6eb79-27c3-465b-adae-b32d96c56b65-kube-api-access-x27wk\") pod \"ceilometer-0\" (UID: \"09a6eb79-27c3-465b-adae-b32d96c56b65\") " pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.480703 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.547541 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvq5n\" (UniqueName: \"kubernetes.io/projected/5783da86-2f9d-42da-ae1e-7df1f4190892-kube-api-access-wvq5n\") pod \"5783da86-2f9d-42da-ae1e-7df1f4190892\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") "
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.547638 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5783da86-2f9d-42da-ae1e-7df1f4190892-logs\") pod \"5783da86-2f9d-42da-ae1e-7df1f4190892\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") "
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.547688 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-config-data\") pod \"5783da86-2f9d-42da-ae1e-7df1f4190892\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") "
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.547717 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-combined-ca-bundle\") pod \"5783da86-2f9d-42da-ae1e-7df1f4190892\" (UID: \"5783da86-2f9d-42da-ae1e-7df1f4190892\") "
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.548342 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5783da86-2f9d-42da-ae1e-7df1f4190892-logs" (OuterVolumeSpecName: "logs") pod "5783da86-2f9d-42da-ae1e-7df1f4190892" (UID: "5783da86-2f9d-42da-ae1e-7df1f4190892"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.558247 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5783da86-2f9d-42da-ae1e-7df1f4190892-kube-api-access-wvq5n" (OuterVolumeSpecName: "kube-api-access-wvq5n") pod "5783da86-2f9d-42da-ae1e-7df1f4190892" (UID: "5783da86-2f9d-42da-ae1e-7df1f4190892"). InnerVolumeSpecName "kube-api-access-wvq5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.582584 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-config-data" (OuterVolumeSpecName: "config-data") pod "5783da86-2f9d-42da-ae1e-7df1f4190892" (UID: "5783da86-2f9d-42da-ae1e-7df1f4190892"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.604007 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5783da86-2f9d-42da-ae1e-7df1f4190892" (UID: "5783da86-2f9d-42da-ae1e-7df1f4190892"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.650263 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvq5n\" (UniqueName: \"kubernetes.io/projected/5783da86-2f9d-42da-ae1e-7df1f4190892-kube-api-access-wvq5n\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.650299 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5783da86-2f9d-42da-ae1e-7df1f4190892-logs\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.650313 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.650325 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5783da86-2f9d-42da-ae1e-7df1f4190892-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.968290 4699 generic.go:334] "Generic (PLEG): container finished" podID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerID="7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4" exitCode=0
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.968393 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.968420 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5783da86-2f9d-42da-ae1e-7df1f4190892","Type":"ContainerDied","Data":"7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4"}
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.969385 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5783da86-2f9d-42da-ae1e-7df1f4190892","Type":"ContainerDied","Data":"643f5d9494a203696bd97ca6ac87808480d0591a9fdea1382c0442db74ffeabf"}
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.969434 4699 scope.go:117] "RemoveContainer" containerID="7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4"
Feb 26 11:34:21 crc kubenswrapper[4699]: I0226 11:34:21.990104 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.004811 4699 scope.go:117] "RemoveContainer" containerID="5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.041641 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.124140 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.153407 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 26 11:34:22 crc kubenswrapper[4699]: E0226 11:34:22.156541 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerName="nova-api-log"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.156578 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerName="nova-api-log"
Feb 26 11:34:22 crc kubenswrapper[4699]: E0226 11:34:22.156625 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerName="nova-api-api"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.156635 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerName="nova-api-api"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.156858 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerName="nova-api-log"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.156881 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="5783da86-2f9d-42da-ae1e-7df1f4190892" containerName="nova-api-api"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.158567 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.159613 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.161934 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.162182 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.162340 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.163946 4699 scope.go:117] "RemoveContainer" containerID="7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4"
Feb 26 11:34:22 crc kubenswrapper[4699]: E0226 11:34:22.164661 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4\": container with ID starting with 7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4 not found: ID does not exist" containerID="7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.164703 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4"} err="failed to get container status \"7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4\": rpc error: code = NotFound desc = could not find container \"7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4\": container with ID starting with 7419fb8defa328e64d4ff51073bc24c52256ef3871192c86a3186a5335afa3a4 not found: ID does not exist"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.164749 4699 scope.go:117] "RemoveContainer" containerID="5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd"
Feb 26 11:34:22 crc kubenswrapper[4699]: E0226 11:34:22.165534 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd\": container with ID starting with 5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd not found: ID does not exist" containerID="5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.165580 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd"} err="failed to get container status \"5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd\": rpc error: code = NotFound desc = could not find container \"5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd\": container with ID starting with 5da9136f3d1044f7ed5915a0bc35b164a1ef4c4d70bffc4d5337bc060879e9cd not found: ID does not exist"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.166560 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.271510 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5783da86-2f9d-42da-ae1e-7df1f4190892" path="/var/lib/kubelet/pods/5783da86-2f9d-42da-ae1e-7df1f4190892/volumes"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.272262 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc43627-a5fc-40fe-b7a4-6d04e80481dd" path="/var/lib/kubelet/pods/cfc43627-a5fc-40fe-b7a4-6d04e80481dd/volumes"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.272672 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-77cbz"]
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.273849 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.276187 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-77cbz"]
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.276371 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.276680 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.280388 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/325cc77b-b7fa-435b-b6fe-332ee76d0feb-logs\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.280431 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-config-data\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.280492 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.280546 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.280566 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdwcz\" (UniqueName: \"kubernetes.io/projected/325cc77b-b7fa-435b-b6fe-332ee76d0feb-kube-api-access-qdwcz\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.280624 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-public-tls-certs\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.382151 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-public-tls-certs\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.382237 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/325cc77b-b7fa-435b-b6fe-332ee76d0feb-logs\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.382268 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-config-data\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.382315 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-config-data\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.382352 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.382381 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-scripts\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.382443 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.382462 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.382480 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdwcz\" (UniqueName: \"kubernetes.io/projected/325cc77b-b7fa-435b-b6fe-332ee76d0feb-kube-api-access-qdwcz\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.382529 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5hfk\" (UniqueName: \"kubernetes.io/projected/462a2449-2712-4bb7-9ec9-6e09a1800361-kube-api-access-r5hfk\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.383781 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/325cc77b-b7fa-435b-b6fe-332ee76d0feb-logs\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.388279 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.389411 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-config-data\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.398707 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.403462 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-public-tls-certs\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.404391 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdwcz\" (UniqueName: \"kubernetes.io/projected/325cc77b-b7fa-435b-b6fe-332ee76d0feb-kube-api-access-qdwcz\") pod \"nova-api-0\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.483882 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-scripts\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.484187 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.484455 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.485171 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5hfk\" (UniqueName: \"kubernetes.io/projected/462a2449-2712-4bb7-9ec9-6e09a1800361-kube-api-access-r5hfk\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.485860 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-config-data\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.488895 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-scripts\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.489529 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-config-data\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.493326 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.509520 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5hfk\" (UniqueName: \"kubernetes.io/projected/462a2449-2712-4bb7-9ec9-6e09a1800361-kube-api-access-r5hfk\") pod \"nova-cell1-cell-mapping-77cbz\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.595930 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77cbz"
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.940518 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 26 11:34:22 crc kubenswrapper[4699]: W0226 11:34:22.947468 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod325cc77b_b7fa_435b_b6fe_332ee76d0feb.slice/crio-f0fb906b3d6a1aee565ebdc2050e6c35e9bcecf22462ca9f4656f519d7f3b17f WatchSource:0}: Error finding container f0fb906b3d6a1aee565ebdc2050e6c35e9bcecf22462ca9f4656f519d7f3b17f: Status 404 returned error can't find the container with id f0fb906b3d6a1aee565ebdc2050e6c35e9bcecf22462ca9f4656f519d7f3b17f
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.991947 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09a6eb79-27c3-465b-adae-b32d96c56b65","Type":"ContainerStarted","Data":"b953548672b22e05f3c8a2d7c2f458bb37c1ef0d94f774cc000e8124fcf46ff2"}
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.993227 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09a6eb79-27c3-465b-adae-b32d96c56b65","Type":"ContainerStarted","Data":"38c673a748b8c890c35cde20c5dedb4dde0c5601fed372bbd77f1da4ff9c4fc4"}
Feb 26 11:34:22 crc kubenswrapper[4699]: I0226 11:34:22.993747 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"325cc77b-b7fa-435b-b6fe-332ee76d0feb","Type":"ContainerStarted","Data":"f0fb906b3d6a1aee565ebdc2050e6c35e9bcecf22462ca9f4656f519d7f3b17f"}
Feb 26 11:34:23 crc kubenswrapper[4699]: I0226 11:34:23.070274 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-77cbz"]
Feb 26 11:34:23 crc kubenswrapper[4699]: W0226 11:34:23.075882 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod462a2449_2712_4bb7_9ec9_6e09a1800361.slice/crio-d6ac1061cf5430e06667789d441516ddac57163a4567ce58f4cc4e16b80bf0eb WatchSource:0}: Error finding container d6ac1061cf5430e06667789d441516ddac57163a4567ce58f4cc4e16b80bf0eb: Status 404 returned error can't find the container with id d6ac1061cf5430e06667789d441516ddac57163a4567ce58f4cc4e16b80bf0eb
Feb 26 11:34:24 crc kubenswrapper[4699]: I0226 11:34:24.009977 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77cbz" event={"ID":"462a2449-2712-4bb7-9ec9-6e09a1800361","Type":"ContainerStarted","Data":"c08f0ffa53e77347fd581c677192ce80109e73083d1caad9bb7251a920a34172"}
Feb 26 11:34:24 crc kubenswrapper[4699]: I0226 11:34:24.010301 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77cbz" event={"ID":"462a2449-2712-4bb7-9ec9-6e09a1800361","Type":"ContainerStarted","Data":"d6ac1061cf5430e06667789d441516ddac57163a4567ce58f4cc4e16b80bf0eb"}
Feb 26 11:34:24 crc kubenswrapper[4699]: I0226 11:34:24.013642 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09a6eb79-27c3-465b-adae-b32d96c56b65","Type":"ContainerStarted","Data":"01eea719091c456f24754ccdf719523f2d585f9f472b3ec03f08f869d78d48a9"}
Feb 26 11:34:24 crc kubenswrapper[4699]: I0226 11:34:24.015931 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"325cc77b-b7fa-435b-b6fe-332ee76d0feb","Type":"ContainerStarted","Data":"1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02"}
Feb 26 11:34:24 crc kubenswrapper[4699]: I0226 11:34:24.015975 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"325cc77b-b7fa-435b-b6fe-332ee76d0feb","Type":"ContainerStarted","Data":"07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705"}
Feb 26 11:34:24 crc kubenswrapper[4699]: I0226 11:34:24.028834 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-77cbz" podStartSLOduration=2.02881789 podStartE2EDuration="2.02881789s" podCreationTimestamp="2026-02-26 11:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:24.024710251 +0000 UTC m=+1409.835536685" watchObservedRunningTime="2026-02-26 11:34:24.02881789 +0000 UTC m=+1409.839644324"
Feb 26 11:34:24 crc kubenswrapper[4699]: I0226 11:34:24.054369 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.054344422 podStartE2EDuration="2.054344422s" podCreationTimestamp="2026-02-26 11:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:24.050180921 +0000 UTC m=+1409.861007375" watchObservedRunningTime="2026-02-26 11:34:24.054344422 +0000 UTC m=+1409.865170856"
Feb 26 11:34:25 crc kubenswrapper[4699]: I0226 11:34:25.032084 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09a6eb79-27c3-465b-adae-b32d96c56b65","Type":"ContainerStarted","Data":"04e3bf77d1c5035c9875eed43705f82d5f25f919def391b6d1b9f1e0eccc3eed"}
Feb 26 11:34:25 crc kubenswrapper[4699]: I0226 11:34:25.354317 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct"
Feb 26 11:34:25 crc kubenswrapper[4699]: I0226 11:34:25.456155 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-5jmd5"]
Feb 26 11:34:25 crc kubenswrapper[4699]: I0226 11:34:25.456964 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" podUID="90cd25a3-8ac5-49d2-b3a1-79c773a0b394" containerName="dnsmasq-dns" containerID="cri-o://50c7ddb03e58cd9791ab6f41d1755213bce0ea0826aec0f5b6934548dfaf9782" gracePeriod=10
Feb 26 11:34:25 crc kubenswrapper[4699]: I0226 11:34:25.616221 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" podUID="90cd25a3-8ac5-49d2-b3a1-79c773a0b394" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.201:5353: connect: connection refused"
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.041309 4699 generic.go:334] "Generic (PLEG): container finished" podID="90cd25a3-8ac5-49d2-b3a1-79c773a0b394" containerID="50c7ddb03e58cd9791ab6f41d1755213bce0ea0826aec0f5b6934548dfaf9782" exitCode=0
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.041519 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" event={"ID":"90cd25a3-8ac5-49d2-b3a1-79c773a0b394","Type":"ContainerDied","Data":"50c7ddb03e58cd9791ab6f41d1755213bce0ea0826aec0f5b6934548dfaf9782"}
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.041649 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" event={"ID":"90cd25a3-8ac5-49d2-b3a1-79c773a0b394","Type":"ContainerDied","Data":"e1b123469f14c639c8594d09af4903ba398bf0ca95a50aeadc71f0627b95230b"}
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.041670 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1b123469f14c639c8594d09af4903ba398bf0ca95a50aeadc71f0627b95230b"
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.062340 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5"
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.162230 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-svc\") pod \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") "
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.162299 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-nb\") pod \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") "
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.162327 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-sb\") pod \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") "
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.162388 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gpwp\" (UniqueName: \"kubernetes.io/projected/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-kube-api-access-6gpwp\") pod \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") "
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.162524 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-swift-storage-0\") pod \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") "
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.162607 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-config\") pod \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\" (UID: \"90cd25a3-8ac5-49d2-b3a1-79c773a0b394\") "
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.184562 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-kube-api-access-6gpwp" (OuterVolumeSpecName: "kube-api-access-6gpwp") pod "90cd25a3-8ac5-49d2-b3a1-79c773a0b394" (UID: "90cd25a3-8ac5-49d2-b3a1-79c773a0b394"). InnerVolumeSpecName "kube-api-access-6gpwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.208441 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "90cd25a3-8ac5-49d2-b3a1-79c773a0b394" (UID: "90cd25a3-8ac5-49d2-b3a1-79c773a0b394"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.215655 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "90cd25a3-8ac5-49d2-b3a1-79c773a0b394" (UID: "90cd25a3-8ac5-49d2-b3a1-79c773a0b394"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.215674 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90cd25a3-8ac5-49d2-b3a1-79c773a0b394" (UID: "90cd25a3-8ac5-49d2-b3a1-79c773a0b394"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.220103 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-config" (OuterVolumeSpecName: "config") pod "90cd25a3-8ac5-49d2-b3a1-79c773a0b394" (UID: "90cd25a3-8ac5-49d2-b3a1-79c773a0b394"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.224998 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "90cd25a3-8ac5-49d2-b3a1-79c773a0b394" (UID: "90cd25a3-8ac5-49d2-b3a1-79c773a0b394"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.264628 4699 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.264669 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.264679 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.264689 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.264698 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:26 crc kubenswrapper[4699]: I0226 11:34:26.264706 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gpwp\" (UniqueName: \"kubernetes.io/projected/90cd25a3-8ac5-49d2-b3a1-79c773a0b394-kube-api-access-6gpwp\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:27 crc kubenswrapper[4699]: I0226 11:34:27.049532 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-5jmd5" Feb 26 11:34:27 crc kubenswrapper[4699]: I0226 11:34:27.077209 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-5jmd5"] Feb 26 11:34:27 crc kubenswrapper[4699]: I0226 11:34:27.085643 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-5jmd5"] Feb 26 11:34:27 crc kubenswrapper[4699]: I0226 11:34:27.714591 4699 scope.go:117] "RemoveContainer" containerID="1a0ef1ef6d99c76627fc03dba6d4f740ea96e617f11be2b18231f70b40dd8703" Feb 26 11:34:28 crc kubenswrapper[4699]: I0226 11:34:28.272952 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90cd25a3-8ac5-49d2-b3a1-79c773a0b394" path="/var/lib/kubelet/pods/90cd25a3-8ac5-49d2-b3a1-79c773a0b394/volumes" Feb 26 11:34:29 crc kubenswrapper[4699]: I0226 11:34:29.070000 4699 generic.go:334] "Generic (PLEG): container finished" podID="462a2449-2712-4bb7-9ec9-6e09a1800361" containerID="c08f0ffa53e77347fd581c677192ce80109e73083d1caad9bb7251a920a34172" exitCode=0 Feb 26 11:34:29 crc kubenswrapper[4699]: I0226 11:34:29.070103 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77cbz" event={"ID":"462a2449-2712-4bb7-9ec9-6e09a1800361","Type":"ContainerDied","Data":"c08f0ffa53e77347fd581c677192ce80109e73083d1caad9bb7251a920a34172"} Feb 26 11:34:29 crc kubenswrapper[4699]: I0226 11:34:29.074855 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09a6eb79-27c3-465b-adae-b32d96c56b65","Type":"ContainerStarted","Data":"43050544752a7e32adeaf5163ad3ea01f011caaf4c7520d8ab03222a32f920f2"} Feb 26 11:34:29 crc kubenswrapper[4699]: I0226 11:34:29.074971 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:34:29 crc kubenswrapper[4699]: I0226 11:34:29.126970 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.258535211 podStartE2EDuration="8.126942975s" podCreationTimestamp="2026-02-26 11:34:21 +0000 UTC" firstStartedPulling="2026-02-26 11:34:22.109236094 +0000 UTC m=+1407.920062528" lastFinishedPulling="2026-02-26 11:34:27.977643848 +0000 UTC m=+1413.788470292" observedRunningTime="2026-02-26 11:34:29.110171808 +0000 UTC m=+1414.920998252" watchObservedRunningTime="2026-02-26 11:34:29.126942975 +0000 UTC m=+1414.937769419" Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.455792 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77cbz" Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.580869 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5hfk\" (UniqueName: \"kubernetes.io/projected/462a2449-2712-4bb7-9ec9-6e09a1800361-kube-api-access-r5hfk\") pod \"462a2449-2712-4bb7-9ec9-6e09a1800361\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.581349 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-scripts\") pod \"462a2449-2712-4bb7-9ec9-6e09a1800361\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.581411 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-combined-ca-bundle\") pod \"462a2449-2712-4bb7-9ec9-6e09a1800361\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.581546 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-config-data\") pod 
\"462a2449-2712-4bb7-9ec9-6e09a1800361\" (UID: \"462a2449-2712-4bb7-9ec9-6e09a1800361\") " Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.587995 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462a2449-2712-4bb7-9ec9-6e09a1800361-kube-api-access-r5hfk" (OuterVolumeSpecName: "kube-api-access-r5hfk") pod "462a2449-2712-4bb7-9ec9-6e09a1800361" (UID: "462a2449-2712-4bb7-9ec9-6e09a1800361"). InnerVolumeSpecName "kube-api-access-r5hfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.588195 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-scripts" (OuterVolumeSpecName: "scripts") pod "462a2449-2712-4bb7-9ec9-6e09a1800361" (UID: "462a2449-2712-4bb7-9ec9-6e09a1800361"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.612583 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "462a2449-2712-4bb7-9ec9-6e09a1800361" (UID: "462a2449-2712-4bb7-9ec9-6e09a1800361"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.620264 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-config-data" (OuterVolumeSpecName: "config-data") pod "462a2449-2712-4bb7-9ec9-6e09a1800361" (UID: "462a2449-2712-4bb7-9ec9-6e09a1800361"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.683733 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5hfk\" (UniqueName: \"kubernetes.io/projected/462a2449-2712-4bb7-9ec9-6e09a1800361-kube-api-access-r5hfk\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.683771 4699 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.683783 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:30 crc kubenswrapper[4699]: I0226 11:34:30.683795 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462a2449-2712-4bb7-9ec9-6e09a1800361-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:31 crc kubenswrapper[4699]: I0226 11:34:31.094155 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77cbz" event={"ID":"462a2449-2712-4bb7-9ec9-6e09a1800361","Type":"ContainerDied","Data":"d6ac1061cf5430e06667789d441516ddac57163a4567ce58f4cc4e16b80bf0eb"} Feb 26 11:34:31 crc kubenswrapper[4699]: I0226 11:34:31.094195 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ac1061cf5430e06667789d441516ddac57163a4567ce58f4cc4e16b80bf0eb" Feb 26 11:34:31 crc kubenswrapper[4699]: I0226 11:34:31.094265 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77cbz" Feb 26 11:34:31 crc kubenswrapper[4699]: I0226 11:34:31.287537 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:31 crc kubenswrapper[4699]: I0226 11:34:31.287809 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="325cc77b-b7fa-435b-b6fe-332ee76d0feb" containerName="nova-api-log" containerID="cri-o://07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705" gracePeriod=30 Feb 26 11:34:31 crc kubenswrapper[4699]: I0226 11:34:31.287855 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="325cc77b-b7fa-435b-b6fe-332ee76d0feb" containerName="nova-api-api" containerID="cri-o://1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02" gracePeriod=30 Feb 26 11:34:31 crc kubenswrapper[4699]: I0226 11:34:31.298272 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:31 crc kubenswrapper[4699]: I0226 11:34:31.298856 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a8bf50ee-a389-4a35-8899-81d885e1ec38" containerName="nova-scheduler-scheduler" containerID="cri-o://c97d92e712559c7220f14c08b215ac1ea015fa517ad26257d3edff7ee08e2ec0" gracePeriod=30 Feb 26 11:34:31 crc kubenswrapper[4699]: I0226 11:34:31.361051 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:31 crc kubenswrapper[4699]: I0226 11:34:31.361366 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-log" containerID="cri-o://7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa" gracePeriod=30 Feb 26 11:34:31 crc kubenswrapper[4699]: I0226 11:34:31.361479 4699 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-metadata" containerID="cri-o://36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81" gracePeriod=30 Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.028319 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.114532 4699 generic.go:334] "Generic (PLEG): container finished" podID="c847caf4-446a-4738-88a8-26d1628c91f7" containerID="7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa" exitCode=143 Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.114614 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c847caf4-446a-4738-88a8-26d1628c91f7","Type":"ContainerDied","Data":"7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa"} Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.117214 4699 generic.go:334] "Generic (PLEG): container finished" podID="325cc77b-b7fa-435b-b6fe-332ee76d0feb" containerID="1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02" exitCode=0 Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.117242 4699 generic.go:334] "Generic (PLEG): container finished" podID="325cc77b-b7fa-435b-b6fe-332ee76d0feb" containerID="07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705" exitCode=143 Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.117287 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"325cc77b-b7fa-435b-b6fe-332ee76d0feb","Type":"ContainerDied","Data":"1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02"} Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.117306 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"325cc77b-b7fa-435b-b6fe-332ee76d0feb","Type":"ContainerDied","Data":"07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705"} Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.117317 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"325cc77b-b7fa-435b-b6fe-332ee76d0feb","Type":"ContainerDied","Data":"f0fb906b3d6a1aee565ebdc2050e6c35e9bcecf22462ca9f4656f519d7f3b17f"} Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.117332 4699 scope.go:117] "RemoveContainer" containerID="1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.117329 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.121202 4699 generic.go:334] "Generic (PLEG): container finished" podID="a8bf50ee-a389-4a35-8899-81d885e1ec38" containerID="c97d92e712559c7220f14c08b215ac1ea015fa517ad26257d3edff7ee08e2ec0" exitCode=0 Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.121491 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a8bf50ee-a389-4a35-8899-81d885e1ec38","Type":"ContainerDied","Data":"c97d92e712559c7220f14c08b215ac1ea015fa517ad26257d3edff7ee08e2ec0"} Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.141844 4699 scope.go:117] "RemoveContainer" containerID="07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.171672 4699 scope.go:117] "RemoveContainer" containerID="1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02" Feb 26 11:34:32 crc kubenswrapper[4699]: E0226 11:34:32.172335 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02\": container with ID starting with 
1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02 not found: ID does not exist" containerID="1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.172384 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02"} err="failed to get container status \"1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02\": rpc error: code = NotFound desc = could not find container \"1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02\": container with ID starting with 1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02 not found: ID does not exist" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.172549 4699 scope.go:117] "RemoveContainer" containerID="07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705" Feb 26 11:34:32 crc kubenswrapper[4699]: E0226 11:34:32.173154 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705\": container with ID starting with 07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705 not found: ID does not exist" containerID="07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.173232 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705"} err="failed to get container status \"07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705\": rpc error: code = NotFound desc = could not find container \"07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705\": container with ID starting with 07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705 not found: ID does not 
exist" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.173262 4699 scope.go:117] "RemoveContainer" containerID="1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.174136 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02"} err="failed to get container status \"1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02\": rpc error: code = NotFound desc = could not find container \"1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02\": container with ID starting with 1a8f7a599301cb7f75d7e70ef43d139da9f5ff4b21e208c0bbbf4a797499bc02 not found: ID does not exist" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.174185 4699 scope.go:117] "RemoveContainer" containerID="07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.174443 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705"} err="failed to get container status \"07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705\": rpc error: code = NotFound desc = could not find container \"07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705\": container with ID starting with 07f6dfef9a97a1b21b7413336f64271a090f8627e3593a5862c177db3ac2f705 not found: ID does not exist" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.207933 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.214112 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-combined-ca-bundle\") pod \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.214236 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/325cc77b-b7fa-435b-b6fe-332ee76d0feb-logs\") pod \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.214365 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdwcz\" (UniqueName: \"kubernetes.io/projected/325cc77b-b7fa-435b-b6fe-332ee76d0feb-kube-api-access-qdwcz\") pod \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.214411 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-internal-tls-certs\") pod \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.214515 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-public-tls-certs\") pod \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.214602 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-config-data\") pod \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\" (UID: \"325cc77b-b7fa-435b-b6fe-332ee76d0feb\") " Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.216360 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/325cc77b-b7fa-435b-b6fe-332ee76d0feb-logs" (OuterVolumeSpecName: "logs") pod "325cc77b-b7fa-435b-b6fe-332ee76d0feb" (UID: "325cc77b-b7fa-435b-b6fe-332ee76d0feb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.245147 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/325cc77b-b7fa-435b-b6fe-332ee76d0feb-kube-api-access-qdwcz" (OuterVolumeSpecName: "kube-api-access-qdwcz") pod "325cc77b-b7fa-435b-b6fe-332ee76d0feb" (UID: "325cc77b-b7fa-435b-b6fe-332ee76d0feb"). InnerVolumeSpecName "kube-api-access-qdwcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.261909 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "325cc77b-b7fa-435b-b6fe-332ee76d0feb" (UID: "325cc77b-b7fa-435b-b6fe-332ee76d0feb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.276452 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-config-data" (OuterVolumeSpecName: "config-data") pod "325cc77b-b7fa-435b-b6fe-332ee76d0feb" (UID: "325cc77b-b7fa-435b-b6fe-332ee76d0feb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.286947 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "325cc77b-b7fa-435b-b6fe-332ee76d0feb" (UID: "325cc77b-b7fa-435b-b6fe-332ee76d0feb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.312089 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "325cc77b-b7fa-435b-b6fe-332ee76d0feb" (UID: "325cc77b-b7fa-435b-b6fe-332ee76d0feb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.315647 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-combined-ca-bundle\") pod \"a8bf50ee-a389-4a35-8899-81d885e1ec38\" (UID: \"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.315912 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-config-data\") pod \"a8bf50ee-a389-4a35-8899-81d885e1ec38\" (UID: \"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.316230 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85xhh\" (UniqueName: \"kubernetes.io/projected/a8bf50ee-a389-4a35-8899-81d885e1ec38-kube-api-access-85xhh\") pod \"a8bf50ee-a389-4a35-8899-81d885e1ec38\" (UID: 
\"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.316899 4699 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.317001 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.317087 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.317243 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/325cc77b-b7fa-435b-b6fe-332ee76d0feb-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.317326 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdwcz\" (UniqueName: \"kubernetes.io/projected/325cc77b-b7fa-435b-b6fe-332ee76d0feb-kube-api-access-qdwcz\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.317400 4699 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/325cc77b-b7fa-435b-b6fe-332ee76d0feb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.320174 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bf50ee-a389-4a35-8899-81d885e1ec38-kube-api-access-85xhh" (OuterVolumeSpecName: "kube-api-access-85xhh") pod "a8bf50ee-a389-4a35-8899-81d885e1ec38" (UID: 
"a8bf50ee-a389-4a35-8899-81d885e1ec38"). InnerVolumeSpecName "kube-api-access-85xhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:32 crc kubenswrapper[4699]: E0226 11:34:32.343298 4699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-combined-ca-bundle podName:a8bf50ee-a389-4a35-8899-81d885e1ec38 nodeName:}" failed. No retries permitted until 2026-02-26 11:34:32.84326515 +0000 UTC m=+1418.654091584 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-combined-ca-bundle") pod "a8bf50ee-a389-4a35-8899-81d885e1ec38" (UID: "a8bf50ee-a389-4a35-8899-81d885e1ec38") : error deleting /var/lib/kubelet/pods/a8bf50ee-a389-4a35-8899-81d885e1ec38/volume-subpaths: remove /var/lib/kubelet/pods/a8bf50ee-a389-4a35-8899-81d885e1ec38/volume-subpaths: no such file or directory Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.346017 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-config-data" (OuterVolumeSpecName: "config-data") pod "a8bf50ee-a389-4a35-8899-81d885e1ec38" (UID: "a8bf50ee-a389-4a35-8899-81d885e1ec38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.419352 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85xhh\" (UniqueName: \"kubernetes.io/projected/a8bf50ee-a389-4a35-8899-81d885e1ec38-kube-api-access-85xhh\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.419385 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.500858 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.509845 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.523423 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:32 crc kubenswrapper[4699]: E0226 11:34:32.523895 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cd25a3-8ac5-49d2-b3a1-79c773a0b394" containerName="init" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.523917 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cd25a3-8ac5-49d2-b3a1-79c773a0b394" containerName="init" Feb 26 11:34:32 crc kubenswrapper[4699]: E0226 11:34:32.523930 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325cc77b-b7fa-435b-b6fe-332ee76d0feb" containerName="nova-api-api" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.523936 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="325cc77b-b7fa-435b-b6fe-332ee76d0feb" containerName="nova-api-api" Feb 26 11:34:32 crc kubenswrapper[4699]: E0226 11:34:32.523955 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bf50ee-a389-4a35-8899-81d885e1ec38" 
containerName="nova-scheduler-scheduler" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.523962 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bf50ee-a389-4a35-8899-81d885e1ec38" containerName="nova-scheduler-scheduler" Feb 26 11:34:32 crc kubenswrapper[4699]: E0226 11:34:32.523981 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325cc77b-b7fa-435b-b6fe-332ee76d0feb" containerName="nova-api-log" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.523987 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="325cc77b-b7fa-435b-b6fe-332ee76d0feb" containerName="nova-api-log" Feb 26 11:34:32 crc kubenswrapper[4699]: E0226 11:34:32.524002 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462a2449-2712-4bb7-9ec9-6e09a1800361" containerName="nova-manage" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.524008 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="462a2449-2712-4bb7-9ec9-6e09a1800361" containerName="nova-manage" Feb 26 11:34:32 crc kubenswrapper[4699]: E0226 11:34:32.524019 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cd25a3-8ac5-49d2-b3a1-79c773a0b394" containerName="dnsmasq-dns" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.524024 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cd25a3-8ac5-49d2-b3a1-79c773a0b394" containerName="dnsmasq-dns" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.524200 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="325cc77b-b7fa-435b-b6fe-332ee76d0feb" containerName="nova-api-log" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.524224 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="90cd25a3-8ac5-49d2-b3a1-79c773a0b394" containerName="dnsmasq-dns" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.524234 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bf50ee-a389-4a35-8899-81d885e1ec38" 
containerName="nova-scheduler-scheduler" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.524244 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="325cc77b-b7fa-435b-b6fe-332ee76d0feb" containerName="nova-api-api" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.524253 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="462a2449-2712-4bb7-9ec9-6e09a1800361" containerName="nova-manage" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.525913 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.528512 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.528725 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.528772 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.536598 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.622914 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-logs\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.623427 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-public-tls-certs\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc 
kubenswrapper[4699]: I0226 11:34:32.623506 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.623583 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-config-data\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.623607 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.623675 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx47p\" (UniqueName: \"kubernetes.io/projected/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-kube-api-access-bx47p\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.725328 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-public-tls-certs\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.725369 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.725399 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-config-data\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.725413 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.725442 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx47p\" (UniqueName: \"kubernetes.io/projected/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-kube-api-access-bx47p\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.725504 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-logs\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.725941 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-logs\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 
11:34:32.729143 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-public-tls-certs\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.729742 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-config-data\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.740762 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.740838 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.743921 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx47p\" (UniqueName: \"kubernetes.io/projected/2d0d807f-7fdc-4239-b7bb-1952c2f7c222-kube-api-access-bx47p\") pod \"nova-api-0\" (UID: \"2d0d807f-7fdc-4239-b7bb-1952c2f7c222\") " pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.842809 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.929194 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-combined-ca-bundle\") pod \"a8bf50ee-a389-4a35-8899-81d885e1ec38\" (UID: \"a8bf50ee-a389-4a35-8899-81d885e1ec38\") " Feb 26 11:34:32 crc kubenswrapper[4699]: I0226 11:34:32.933561 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8bf50ee-a389-4a35-8899-81d885e1ec38" (UID: "a8bf50ee-a389-4a35-8899-81d885e1ec38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.031453 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bf50ee-a389-4a35-8899-81d885e1ec38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.131083 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a8bf50ee-a389-4a35-8899-81d885e1ec38","Type":"ContainerDied","Data":"35561c1ff93cac360e0003512da7f67e357d0a40bd9387c2cdd037287561205d"} Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.131159 4699 scope.go:117] "RemoveContainer" containerID="c97d92e712559c7220f14c08b215ac1ea015fa517ad26257d3edff7ee08e2ec0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.131101 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.192190 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.221480 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.232463 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.233928 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.238243 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.260800 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.348435 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d8371db-373f-4a41-97cb-b2d00aa17571-config-data\") pod \"nova-scheduler-0\" (UID: \"9d8371db-373f-4a41-97cb-b2d00aa17571\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.348786 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d8371db-373f-4a41-97cb-b2d00aa17571-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d8371db-373f-4a41-97cb-b2d00aa17571\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.348844 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w45jw\" (UniqueName: 
\"kubernetes.io/projected/9d8371db-373f-4a41-97cb-b2d00aa17571-kube-api-access-w45jw\") pod \"nova-scheduler-0\" (UID: \"9d8371db-373f-4a41-97cb-b2d00aa17571\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.381022 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.451273 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d8371db-373f-4a41-97cb-b2d00aa17571-config-data\") pod \"nova-scheduler-0\" (UID: \"9d8371db-373f-4a41-97cb-b2d00aa17571\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.451491 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d8371db-373f-4a41-97cb-b2d00aa17571-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d8371db-373f-4a41-97cb-b2d00aa17571\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.451530 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w45jw\" (UniqueName: \"kubernetes.io/projected/9d8371db-373f-4a41-97cb-b2d00aa17571-kube-api-access-w45jw\") pod \"nova-scheduler-0\" (UID: \"9d8371db-373f-4a41-97cb-b2d00aa17571\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.456794 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d8371db-373f-4a41-97cb-b2d00aa17571-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d8371db-373f-4a41-97cb-b2d00aa17571\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.457010 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9d8371db-373f-4a41-97cb-b2d00aa17571-config-data\") pod \"nova-scheduler-0\" (UID: \"9d8371db-373f-4a41-97cb-b2d00aa17571\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.469091 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w45jw\" (UniqueName: \"kubernetes.io/projected/9d8371db-373f-4a41-97cb-b2d00aa17571-kube-api-access-w45jw\") pod \"nova-scheduler-0\" (UID: \"9d8371db-373f-4a41-97cb-b2d00aa17571\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:33 crc kubenswrapper[4699]: I0226 11:34:33.579518 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:34:34 crc kubenswrapper[4699]: I0226 11:34:34.025525 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:34 crc kubenswrapper[4699]: I0226 11:34:34.145726 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d0d807f-7fdc-4239-b7bb-1952c2f7c222","Type":"ContainerStarted","Data":"7d3165477e1642b932da75d5ca4b8b6972b15beeee88048f905d0dbaba9ac6ea"} Feb 26 11:34:34 crc kubenswrapper[4699]: I0226 11:34:34.145774 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d0d807f-7fdc-4239-b7bb-1952c2f7c222","Type":"ContainerStarted","Data":"8027d2ce6c898378c118504a1d7fc78d863f7a78bd83a665b6126a5ea1a0d61a"} Feb 26 11:34:34 crc kubenswrapper[4699]: I0226 11:34:34.145786 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d0d807f-7fdc-4239-b7bb-1952c2f7c222","Type":"ContainerStarted","Data":"191a5ce01cbb2f558c3373d675ae1bea411b42b7fe62137febb4e27dd54ec69d"} Feb 26 11:34:34 crc kubenswrapper[4699]: I0226 11:34:34.148819 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"9d8371db-373f-4a41-97cb-b2d00aa17571","Type":"ContainerStarted","Data":"094dacce7e358a32d66fa5fcd4112046338df7d27740f54feecf442612c8341d"} Feb 26 11:34:34 crc kubenswrapper[4699]: I0226 11:34:34.184644 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.184619714 podStartE2EDuration="2.184619714s" podCreationTimestamp="2026-02-26 11:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:34.175898361 +0000 UTC m=+1419.986724815" watchObservedRunningTime="2026-02-26 11:34:34.184619714 +0000 UTC m=+1419.995446158" Feb 26 11:34:34 crc kubenswrapper[4699]: I0226 11:34:34.270626 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="325cc77b-b7fa-435b-b6fe-332ee76d0feb" path="/var/lib/kubelet/pods/325cc77b-b7fa-435b-b6fe-332ee76d0feb/volumes" Feb 26 11:34:34 crc kubenswrapper[4699]: I0226 11:34:34.271264 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8bf50ee-a389-4a35-8899-81d885e1ec38" path="/var/lib/kubelet/pods/a8bf50ee-a389-4a35-8899-81d885e1ec38/volumes" Feb 26 11:34:34 crc kubenswrapper[4699]: I0226 11:34:34.495071 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:41344->10.217.0.204:8775: read: connection reset by peer" Feb 26 11:34:34 crc kubenswrapper[4699]: I0226 11:34:34.495144 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:41342->10.217.0.204:8775: read: connection reset by peer" Feb 26 11:34:34 crc 
kubenswrapper[4699]: I0226 11:34:34.960551 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.085021 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-config-data\") pod \"c847caf4-446a-4738-88a8-26d1628c91f7\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.085092 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c847caf4-446a-4738-88a8-26d1628c91f7-logs\") pod \"c847caf4-446a-4738-88a8-26d1628c91f7\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.085110 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj8g6\" (UniqueName: \"kubernetes.io/projected/c847caf4-446a-4738-88a8-26d1628c91f7-kube-api-access-rj8g6\") pod \"c847caf4-446a-4738-88a8-26d1628c91f7\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.085214 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-nova-metadata-tls-certs\") pod \"c847caf4-446a-4738-88a8-26d1628c91f7\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.085265 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-combined-ca-bundle\") pod \"c847caf4-446a-4738-88a8-26d1628c91f7\" (UID: \"c847caf4-446a-4738-88a8-26d1628c91f7\") " Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.086403 4699 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c847caf4-446a-4738-88a8-26d1628c91f7-logs" (OuterVolumeSpecName: "logs") pod "c847caf4-446a-4738-88a8-26d1628c91f7" (UID: "c847caf4-446a-4738-88a8-26d1628c91f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.102418 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c847caf4-446a-4738-88a8-26d1628c91f7-kube-api-access-rj8g6" (OuterVolumeSpecName: "kube-api-access-rj8g6") pod "c847caf4-446a-4738-88a8-26d1628c91f7" (UID: "c847caf4-446a-4738-88a8-26d1628c91f7"). InnerVolumeSpecName "kube-api-access-rj8g6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.120785 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c847caf4-446a-4738-88a8-26d1628c91f7" (UID: "c847caf4-446a-4738-88a8-26d1628c91f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.124700 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-config-data" (OuterVolumeSpecName: "config-data") pod "c847caf4-446a-4738-88a8-26d1628c91f7" (UID: "c847caf4-446a-4738-88a8-26d1628c91f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.174764 4699 generic.go:334] "Generic (PLEG): container finished" podID="c847caf4-446a-4738-88a8-26d1628c91f7" containerID="36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81" exitCode=0 Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.175076 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c847caf4-446a-4738-88a8-26d1628c91f7","Type":"ContainerDied","Data":"36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81"} Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.175103 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c847caf4-446a-4738-88a8-26d1628c91f7","Type":"ContainerDied","Data":"3e0522c5f3203ed953a2188504efca913e34205ca0ad66ee7025a214978a16b0"} Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.175137 4699 scope.go:117] "RemoveContainer" containerID="36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.175260 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.181439 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9d8371db-373f-4a41-97cb-b2d00aa17571","Type":"ContainerStarted","Data":"c9f59789fb46f140c14a75b396fc6615b2fefe16f21f2f8d660f04f539768d82"} Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.184913 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c847caf4-446a-4738-88a8-26d1628c91f7" (UID: "c847caf4-446a-4738-88a8-26d1628c91f7"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.187759 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.187783 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.187813 4699 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c847caf4-446a-4738-88a8-26d1628c91f7-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.187824 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj8g6\" (UniqueName: \"kubernetes.io/projected/c847caf4-446a-4738-88a8-26d1628c91f7-kube-api-access-rj8g6\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.187833 4699 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c847caf4-446a-4738-88a8-26d1628c91f7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.199778 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.199758436 podStartE2EDuration="2.199758436s" podCreationTimestamp="2026-02-26 11:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:35.1994743 +0000 UTC m=+1421.010300734" watchObservedRunningTime="2026-02-26 11:34:35.199758436 +0000 UTC m=+1421.010584870" Feb 26 11:34:35 crc kubenswrapper[4699]: 
I0226 11:34:35.213933 4699 scope.go:117] "RemoveContainer" containerID="7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.236489 4699 scope.go:117] "RemoveContainer" containerID="36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81" Feb 26 11:34:35 crc kubenswrapper[4699]: E0226 11:34:35.237006 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81\": container with ID starting with 36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81 not found: ID does not exist" containerID="36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.237037 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81"} err="failed to get container status \"36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81\": rpc error: code = NotFound desc = could not find container \"36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81\": container with ID starting with 36d1cb8833b256faa223eaa6a7298ffa4e21878d531a7626347d9cfebc03ef81 not found: ID does not exist" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.237065 4699 scope.go:117] "RemoveContainer" containerID="7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa" Feb 26 11:34:35 crc kubenswrapper[4699]: E0226 11:34:35.237703 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa\": container with ID starting with 7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa not found: ID does not exist" 
containerID="7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.237761 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa"} err="failed to get container status \"7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa\": rpc error: code = NotFound desc = could not find container \"7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa\": container with ID starting with 7421fb2591f05611aff351133a3564dd072d71f5d14ccfd25934a2a6a477aeaa not found: ID does not exist" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.519991 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.541103 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.564208 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:35 crc kubenswrapper[4699]: E0226 11:34:35.564950 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-metadata" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.564976 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-metadata" Feb 26 11:34:35 crc kubenswrapper[4699]: E0226 11:34:35.564989 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-log" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.564999 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-log" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.565317 
4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-log" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.565347 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" containerName="nova-metadata-metadata" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.567020 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.576308 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.581915 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.583609 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.708884 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15752dfa-4afb-412f-99a0-75c5fe76f6a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.708963 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q66wh\" (UniqueName: \"kubernetes.io/projected/15752dfa-4afb-412f-99a0-75c5fe76f6a8-kube-api-access-q66wh\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.709000 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/15752dfa-4afb-412f-99a0-75c5fe76f6a8-logs\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.709041 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/15752dfa-4afb-412f-99a0-75c5fe76f6a8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.709148 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15752dfa-4afb-412f-99a0-75c5fe76f6a8-config-data\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.810414 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15752dfa-4afb-412f-99a0-75c5fe76f6a8-logs\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.810499 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/15752dfa-4afb-412f-99a0-75c5fe76f6a8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.810617 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15752dfa-4afb-412f-99a0-75c5fe76f6a8-config-data\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " 
pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.810666 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15752dfa-4afb-412f-99a0-75c5fe76f6a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.810719 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q66wh\" (UniqueName: \"kubernetes.io/projected/15752dfa-4afb-412f-99a0-75c5fe76f6a8-kube-api-access-q66wh\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.811982 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15752dfa-4afb-412f-99a0-75c5fe76f6a8-logs\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.816465 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/15752dfa-4afb-412f-99a0-75c5fe76f6a8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.816494 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15752dfa-4afb-412f-99a0-75c5fe76f6a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.818002 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/15752dfa-4afb-412f-99a0-75c5fe76f6a8-config-data\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.834401 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q66wh\" (UniqueName: \"kubernetes.io/projected/15752dfa-4afb-412f-99a0-75c5fe76f6a8-kube-api-access-q66wh\") pod \"nova-metadata-0\" (UID: \"15752dfa-4afb-412f-99a0-75c5fe76f6a8\") " pod="openstack/nova-metadata-0" Feb 26 11:34:35 crc kubenswrapper[4699]: I0226 11:34:35.907626 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:36 crc kubenswrapper[4699]: I0226 11:34:36.272828 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c847caf4-446a-4738-88a8-26d1628c91f7" path="/var/lib/kubelet/pods/c847caf4-446a-4738-88a8-26d1628c91f7/volumes" Feb 26 11:34:36 crc kubenswrapper[4699]: I0226 11:34:36.424255 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:37 crc kubenswrapper[4699]: I0226 11:34:37.204806 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15752dfa-4afb-412f-99a0-75c5fe76f6a8","Type":"ContainerStarted","Data":"09054b285959da7487a1e768db02c33e9684f2433939813bfe30bacc02103ce0"} Feb 26 11:34:37 crc kubenswrapper[4699]: I0226 11:34:37.205198 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15752dfa-4afb-412f-99a0-75c5fe76f6a8","Type":"ContainerStarted","Data":"8b667388cc95abb45a06e9af25477109dd583ef5c939684b9cf29a0ee0fc1ffc"} Feb 26 11:34:37 crc kubenswrapper[4699]: I0226 11:34:37.205215 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"15752dfa-4afb-412f-99a0-75c5fe76f6a8","Type":"ContainerStarted","Data":"956e2a368ad68123e08c1b9457041b4b83f645c3c2d4c939e03e4a34a9fc3016"} Feb 26 11:34:37 crc kubenswrapper[4699]: I0226 11:34:37.219829 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.219806155 podStartE2EDuration="2.219806155s" podCreationTimestamp="2026-02-26 11:34:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:37.218377401 +0000 UTC m=+1423.029203855" watchObservedRunningTime="2026-02-26 11:34:37.219806155 +0000 UTC m=+1423.030632599" Feb 26 11:34:38 crc kubenswrapper[4699]: I0226 11:34:38.580085 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 11:34:40 crc kubenswrapper[4699]: I0226 11:34:40.908336 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:34:40 crc kubenswrapper[4699]: I0226 11:34:40.908971 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:34:42 crc kubenswrapper[4699]: I0226 11:34:42.843065 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 11:34:42 crc kubenswrapper[4699]: I0226 11:34:42.843161 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 11:34:43 crc kubenswrapper[4699]: I0226 11:34:43.580691 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 11:34:43 crc kubenswrapper[4699]: I0226 11:34:43.613464 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 11:34:43 crc kubenswrapper[4699]: I0226 11:34:43.854283 4699 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="2d0d807f-7fdc-4239-b7bb-1952c2f7c222" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.217:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:43 crc kubenswrapper[4699]: I0226 11:34:43.854339 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2d0d807f-7fdc-4239-b7bb-1952c2f7c222" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:44 crc kubenswrapper[4699]: I0226 11:34:44.303371 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 11:34:45 crc kubenswrapper[4699]: I0226 11:34:45.907983 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 11:34:45 crc kubenswrapper[4699]: I0226 11:34:45.908083 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 11:34:46 crc kubenswrapper[4699]: I0226 11:34:46.922277 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="15752dfa-4afb-412f-99a0-75c5fe76f6a8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:46 crc kubenswrapper[4699]: I0226 11:34:46.922373 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="15752dfa-4afb-412f-99a0-75c5fe76f6a8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:51 crc kubenswrapper[4699]: I0226 11:34:51.495614 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Feb 26 11:34:52 crc kubenswrapper[4699]: I0226 11:34:52.849717 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 11:34:52 crc kubenswrapper[4699]: I0226 11:34:52.850294 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 11:34:52 crc kubenswrapper[4699]: I0226 11:34:52.850412 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 11:34:52 crc kubenswrapper[4699]: I0226 11:34:52.855786 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 11:34:53 crc kubenswrapper[4699]: I0226 11:34:53.383825 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 11:34:53 crc kubenswrapper[4699]: I0226 11:34:53.390021 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 11:34:55 crc kubenswrapper[4699]: I0226 11:34:55.914249 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 11:34:55 crc kubenswrapper[4699]: I0226 11:34:55.916008 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 11:34:55 crc kubenswrapper[4699]: I0226 11:34:55.923498 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 11:34:56 crc kubenswrapper[4699]: I0226 11:34:56.419521 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 11:35:03 crc kubenswrapper[4699]: I0226 11:35:03.966339 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:35:04 crc kubenswrapper[4699]: I0226 11:35:04.764662 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:35:08 crc kubenswrapper[4699]: I0226 11:35:08.489885 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f4a652b4-5b96-4ebf-81b4-df92846455bd" containerName="rabbitmq" containerID="cri-o://5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7" gracePeriod=604796 Feb 26 11:35:08 crc kubenswrapper[4699]: I0226 11:35:08.833570 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2d57084d-dc87-44e4-bbc8-50c402b7165b" containerName="rabbitmq" containerID="cri-o://34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625" gracePeriod=604796 Feb 26 11:35:12 crc kubenswrapper[4699]: I0226 11:35:12.336473 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f4a652b4-5b96-4ebf-81b4-df92846455bd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Feb 26 11:35:12 crc kubenswrapper[4699]: I0226 11:35:12.389941 4699 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2d57084d-dc87-44e4-bbc8-50c402b7165b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.217993 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.370217 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f4a652b4-5b96-4ebf-81b4-df92846455bd-pod-info\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.370349 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-erlang-cookie\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.370414 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-confd\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.370438 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42595\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-kube-api-access-42595\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.370524 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-tls\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.370564 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-plugins-conf\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.370591 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-server-conf\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.370633 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-plugins\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.370678 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f4a652b4-5b96-4ebf-81b4-df92846455bd-erlang-cookie-secret\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.370734 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-config-data\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.370768 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 
11:35:15.372133 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.372154 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.372387 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.378675 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.381434 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.394071 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-kube-api-access-42595" (OuterVolumeSpecName: "kube-api-access-42595") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "kube-api-access-42595". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.397593 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a652b4-5b96-4ebf-81b4-df92846455bd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.397683 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f4a652b4-5b96-4ebf-81b4-df92846455bd-pod-info" (OuterVolumeSpecName: "pod-info") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.422692 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-config-data" (OuterVolumeSpecName: "config-data") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.472408 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-server-conf" (OuterVolumeSpecName: "server-conf") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.472555 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-server-conf\") pod \"f4a652b4-5b96-4ebf-81b4-df92846455bd\" (UID: \"f4a652b4-5b96-4ebf-81b4-df92846455bd\") " Feb 26 11:35:15 crc kubenswrapper[4699]: W0226 11:35:15.472906 4699 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f4a652b4-5b96-4ebf-81b4-df92846455bd/volumes/kubernetes.io~configmap/server-conf Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.472924 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-server-conf" (OuterVolumeSpecName: "server-conf") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.473231 4699 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f4a652b4-5b96-4ebf-81b4-df92846455bd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.473251 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.473273 4699 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.473282 4699 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f4a652b4-5b96-4ebf-81b4-df92846455bd-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.473292 4699 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.473303 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42595\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-kube-api-access-42595\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.473311 4699 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.473319 
4699 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.473327 4699 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.491499 4699 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.549676 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.551485 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f4a652b4-5b96-4ebf-81b4-df92846455bd" (UID: "f4a652b4-5b96-4ebf-81b4-df92846455bd"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.588610 4699 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f4a652b4-5b96-4ebf-81b4-df92846455bd-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.588657 4699 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.588671 4699 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f4a652b4-5b96-4ebf-81b4-df92846455bd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.670990 4699 generic.go:334] "Generic (PLEG): container finished" podID="f4a652b4-5b96-4ebf-81b4-df92846455bd" containerID="5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7" exitCode=0 Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.671074 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f4a652b4-5b96-4ebf-81b4-df92846455bd","Type":"ContainerDied","Data":"5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7"} Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.671102 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f4a652b4-5b96-4ebf-81b4-df92846455bd","Type":"ContainerDied","Data":"c653f2114aeba63b01bf441458d5ec8f8a6f7c0f66f8ee44c878928901c377ac"} Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.671247 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.671293 4699 scope.go:117] "RemoveContainer" containerID="5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.686976 4699 generic.go:334] "Generic (PLEG): container finished" podID="2d57084d-dc87-44e4-bbc8-50c402b7165b" containerID="34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625" exitCode=0 Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.687018 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d57084d-dc87-44e4-bbc8-50c402b7165b","Type":"ContainerDied","Data":"34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625"} Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.687043 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d57084d-dc87-44e4-bbc8-50c402b7165b","Type":"ContainerDied","Data":"6c8df8aa27d02e0ceb8002bd8f20b8b521706c7be8fe88b152c705914906b7ae"} Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.687104 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689077 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-plugins\") pod \"2d57084d-dc87-44e4-bbc8-50c402b7165b\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689139 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-config-data\") pod \"2d57084d-dc87-44e4-bbc8-50c402b7165b\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689177 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d57084d-dc87-44e4-bbc8-50c402b7165b-erlang-cookie-secret\") pod \"2d57084d-dc87-44e4-bbc8-50c402b7165b\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689269 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"2d57084d-dc87-44e4-bbc8-50c402b7165b\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689313 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-confd\") pod \"2d57084d-dc87-44e4-bbc8-50c402b7165b\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689339 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-plugins-conf\") pod \"2d57084d-dc87-44e4-bbc8-50c402b7165b\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689362 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d57084d-dc87-44e4-bbc8-50c402b7165b-pod-info\") pod \"2d57084d-dc87-44e4-bbc8-50c402b7165b\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689422 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-erlang-cookie\") pod \"2d57084d-dc87-44e4-bbc8-50c402b7165b\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689453 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-tls\") pod \"2d57084d-dc87-44e4-bbc8-50c402b7165b\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689517 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-server-conf\") pod \"2d57084d-dc87-44e4-bbc8-50c402b7165b\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689565 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz7xp\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-kube-api-access-pz7xp\") pod \"2d57084d-dc87-44e4-bbc8-50c402b7165b\" (UID: \"2d57084d-dc87-44e4-bbc8-50c402b7165b\") " Feb 26 11:35:15 crc kubenswrapper[4699]: 
I0226 11:35:15.689586 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2d57084d-dc87-44e4-bbc8-50c402b7165b" (UID: "2d57084d-dc87-44e4-bbc8-50c402b7165b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.689904 4699 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.690100 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2d57084d-dc87-44e4-bbc8-50c402b7165b" (UID: "2d57084d-dc87-44e4-bbc8-50c402b7165b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.690133 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2d57084d-dc87-44e4-bbc8-50c402b7165b" (UID: "2d57084d-dc87-44e4-bbc8-50c402b7165b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.692882 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2d57084d-dc87-44e4-bbc8-50c402b7165b-pod-info" (OuterVolumeSpecName: "pod-info") pod "2d57084d-dc87-44e4-bbc8-50c402b7165b" (UID: "2d57084d-dc87-44e4-bbc8-50c402b7165b"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.693326 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-kube-api-access-pz7xp" (OuterVolumeSpecName: "kube-api-access-pz7xp") pod "2d57084d-dc87-44e4-bbc8-50c402b7165b" (UID: "2d57084d-dc87-44e4-bbc8-50c402b7165b"). InnerVolumeSpecName "kube-api-access-pz7xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.694470 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "2d57084d-dc87-44e4-bbc8-50c402b7165b" (UID: "2d57084d-dc87-44e4-bbc8-50c402b7165b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.698804 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2d57084d-dc87-44e4-bbc8-50c402b7165b" (UID: "2d57084d-dc87-44e4-bbc8-50c402b7165b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.718283 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d57084d-dc87-44e4-bbc8-50c402b7165b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2d57084d-dc87-44e4-bbc8-50c402b7165b" (UID: "2d57084d-dc87-44e4-bbc8-50c402b7165b"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.722604 4699 scope.go:117] "RemoveContainer" containerID="01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.733666 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.754102 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-config-data" (OuterVolumeSpecName: "config-data") pod "2d57084d-dc87-44e4-bbc8-50c402b7165b" (UID: "2d57084d-dc87-44e4-bbc8-50c402b7165b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.759604 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.782585 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:35:15 crc kubenswrapper[4699]: E0226 11:35:15.783047 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d57084d-dc87-44e4-bbc8-50c402b7165b" containerName="rabbitmq" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.783060 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d57084d-dc87-44e4-bbc8-50c402b7165b" containerName="rabbitmq" Feb 26 11:35:15 crc kubenswrapper[4699]: E0226 11:35:15.783080 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a652b4-5b96-4ebf-81b4-df92846455bd" containerName="setup-container" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.783086 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a652b4-5b96-4ebf-81b4-df92846455bd" containerName="setup-container" Feb 26 11:35:15 crc kubenswrapper[4699]: E0226 11:35:15.783100 4699 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4a652b4-5b96-4ebf-81b4-df92846455bd" containerName="rabbitmq" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.783106 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a652b4-5b96-4ebf-81b4-df92846455bd" containerName="rabbitmq" Feb 26 11:35:15 crc kubenswrapper[4699]: E0226 11:35:15.783147 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d57084d-dc87-44e4-bbc8-50c402b7165b" containerName="setup-container" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.783153 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d57084d-dc87-44e4-bbc8-50c402b7165b" containerName="setup-container" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.783332 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d57084d-dc87-44e4-bbc8-50c402b7165b" containerName="rabbitmq" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.783348 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a652b4-5b96-4ebf-81b4-df92846455bd" containerName="rabbitmq" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.784722 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.789714 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.790061 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.790233 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.790393 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.790507 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.790557 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.790738 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-g9kcp" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.791983 4699 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.792009 4699 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.792019 4699 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d57084d-dc87-44e4-bbc8-50c402b7165b-pod-info\") on node \"crc\" 
DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.792028 4699 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.792037 4699 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.792046 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz7xp\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-kube-api-access-pz7xp\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.792054 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.792063 4699 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d57084d-dc87-44e4-bbc8-50c402b7165b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.801909 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.832584 4699 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.847619 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-server-conf" 
(OuterVolumeSpecName: "server-conf") pod "2d57084d-dc87-44e4-bbc8-50c402b7165b" (UID: "2d57084d-dc87-44e4-bbc8-50c402b7165b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.861862 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2d57084d-dc87-44e4-bbc8-50c402b7165b" (UID: "2d57084d-dc87-44e4-bbc8-50c402b7165b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.863726 4699 scope.go:117] "RemoveContainer" containerID="5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7" Feb 26 11:35:15 crc kubenswrapper[4699]: E0226 11:35:15.864238 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7\": container with ID starting with 5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7 not found: ID does not exist" containerID="5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.864323 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7"} err="failed to get container status \"5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7\": rpc error: code = NotFound desc = could not find container \"5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7\": container with ID starting with 5338446d199768b870e8acd23a871db21784e94ef89f9e11e4dc4c4cebdfd0f7 not found: ID does not exist" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.864354 4699 scope.go:117] "RemoveContainer" 
containerID="01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f" Feb 26 11:35:15 crc kubenswrapper[4699]: E0226 11:35:15.864658 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f\": container with ID starting with 01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f not found: ID does not exist" containerID="01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.864692 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f"} err="failed to get container status \"01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f\": rpc error: code = NotFound desc = could not find container \"01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f\": container with ID starting with 01e22f9568e3ffe75e59f3efb64eaf87a7d36c70e421e1930b11972704fa1c7f not found: ID does not exist" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.864715 4699 scope.go:117] "RemoveContainer" containerID="34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.894215 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.894460 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-erlang-cookie\") 
pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.894529 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.894613 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d9b2e6e-c43b-49ae-a71e-844610621e3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.894707 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5nf4\" (UniqueName: \"kubernetes.io/projected/0d9b2e6e-c43b-49ae-a71e-844610621e3e-kube-api-access-x5nf4\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.894755 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d9b2e6e-c43b-49ae-a71e-844610621e3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.894788 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " 
pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.894989 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.895074 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d9b2e6e-c43b-49ae-a71e-844610621e3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.895131 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d9b2e6e-c43b-49ae-a71e-844610621e3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.895255 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d9b2e6e-c43b-49ae-a71e-844610621e3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.895432 4699 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d57084d-dc87-44e4-bbc8-50c402b7165b-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.895450 4699 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.895460 4699 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d57084d-dc87-44e4-bbc8-50c402b7165b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.937226 4699 scope.go:117] "RemoveContainer" containerID="4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.967790 4699 scope.go:117] "RemoveContainer" containerID="34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625" Feb 26 11:35:15 crc kubenswrapper[4699]: E0226 11:35:15.968452 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625\": container with ID starting with 34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625 not found: ID does not exist" containerID="34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.968496 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625"} err="failed to get container status \"34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625\": rpc error: code = NotFound desc = could not find container \"34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625\": container with ID starting with 34a964fb2885f46c40a593f7964e9249ef84fe7b3d8685d54bd17236234cc625 not found: ID does not exist" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.968524 4699 scope.go:117] "RemoveContainer" containerID="4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629" Feb 26 11:35:15 crc kubenswrapper[4699]: E0226 
11:35:15.968834 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629\": container with ID starting with 4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629 not found: ID does not exist" containerID="4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.968867 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629"} err="failed to get container status \"4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629\": rpc error: code = NotFound desc = could not find container \"4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629\": container with ID starting with 4d1eb8a447a752838461f0d636790cc5233f2885f6f184a34b386abf15a99629 not found: ID does not exist" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.997362 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5nf4\" (UniqueName: \"kubernetes.io/projected/0d9b2e6e-c43b-49ae-a71e-844610621e3e-kube-api-access-x5nf4\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.998024 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d9b2e6e-c43b-49ae-a71e-844610621e3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.998074 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.998138 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.998174 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d9b2e6e-c43b-49ae-a71e-844610621e3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.998204 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d9b2e6e-c43b-49ae-a71e-844610621e3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.998289 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d9b2e6e-c43b-49ae-a71e-844610621e3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.998382 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.998453 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.998499 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.998555 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d9b2e6e-c43b-49ae-a71e-844610621e3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.999000 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d9b2e6e-c43b-49ae-a71e-844610621e3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.999208 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.999533 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.999629 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:15 crc kubenswrapper[4699]: I0226 11:35:15.999795 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d9b2e6e-c43b-49ae-a71e-844610621e3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.000551 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d9b2e6e-c43b-49ae-a71e-844610621e3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.002740 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.003062 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d9b2e6e-c43b-49ae-a71e-844610621e3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 
11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.003364 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d9b2e6e-c43b-49ae-a71e-844610621e3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.006011 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d9b2e6e-c43b-49ae-a71e-844610621e3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.023026 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5nf4\" (UniqueName: \"kubernetes.io/projected/0d9b2e6e-c43b-49ae-a71e-844610621e3e-kube-api-access-x5nf4\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.041344 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0d9b2e6e-c43b-49ae-a71e-844610621e3e\") " pod="openstack/rabbitmq-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.047286 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.058676 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.069475 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.071435 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.076813 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.076954 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.077088 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.077155 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.077233 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.077096 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.077359 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mp8r4" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.078466 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.202273 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b731314-eb90-4a19-a425-2f9282af2a7f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.202325 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.202361 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b731314-eb90-4a19-a425-2f9282af2a7f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.202385 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.202464 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b731314-eb90-4a19-a425-2f9282af2a7f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.202488 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.202507 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b731314-eb90-4a19-a425-2f9282af2a7f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.202528 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4f6j\" (UniqueName: \"kubernetes.io/projected/3b731314-eb90-4a19-a425-2f9282af2a7f-kube-api-access-z4f6j\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.202552 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.202754 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.202789 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b731314-eb90-4a19-a425-2f9282af2a7f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.234802 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.282241 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d57084d-dc87-44e4-bbc8-50c402b7165b" path="/var/lib/kubelet/pods/2d57084d-dc87-44e4-bbc8-50c402b7165b/volumes" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.283195 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a652b4-5b96-4ebf-81b4-df92846455bd" path="/var/lib/kubelet/pods/f4a652b4-5b96-4ebf-81b4-df92846455bd/volumes" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.304282 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.304348 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b731314-eb90-4a19-a425-2f9282af2a7f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.305181 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b731314-eb90-4a19-a425-2f9282af2a7f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.305241 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b731314-eb90-4a19-a425-2f9282af2a7f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.305294 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.305583 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.305626 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b731314-eb90-4a19-a425-2f9282af2a7f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.305674 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.305837 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Feb 26 
11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.306057 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b731314-eb90-4a19-a425-2f9282af2a7f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.306151 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.306184 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b731314-eb90-4a19-a425-2f9282af2a7f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.306219 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4f6j\" (UniqueName: \"kubernetes.io/projected/3b731314-eb90-4a19-a425-2f9282af2a7f-kube-api-access-z4f6j\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.306279 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.306668 4699 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b731314-eb90-4a19-a425-2f9282af2a7f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.306848 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.306992 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b731314-eb90-4a19-a425-2f9282af2a7f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.308409 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.309460 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b731314-eb90-4a19-a425-2f9282af2a7f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.310047 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b731314-eb90-4a19-a425-2f9282af2a7f-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.310053 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b731314-eb90-4a19-a425-2f9282af2a7f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.323029 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4f6j\" (UniqueName: \"kubernetes.io/projected/3b731314-eb90-4a19-a425-2f9282af2a7f-kube-api-access-z4f6j\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.344674 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b731314-eb90-4a19-a425-2f9282af2a7f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.396938 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.728170 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.905300 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:35:16 crc kubenswrapper[4699]: W0226 11:35:16.908216 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b731314_eb90_4a19_a425_2f9282af2a7f.slice/crio-a46589663d794710d4c07fab7c01ba86c58a47a81946b5e4edd54fdd8b063a57 WatchSource:0}: Error finding container a46589663d794710d4c07fab7c01ba86c58a47a81946b5e4edd54fdd8b063a57: Status 404 returned error can't find the container with id a46589663d794710d4c07fab7c01ba86c58a47a81946b5e4edd54fdd8b063a57 Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.974907 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-5898z"] Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.976790 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:16 crc kubenswrapper[4699]: I0226 11:35:16.978881 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.029826 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-5898z"] Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.127254 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-config\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.127319 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-svc\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.127349 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.127380 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tchk9\" (UniqueName: \"kubernetes.io/projected/390537ad-fb8f-417c-9577-c6958c371659-kube-api-access-tchk9\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " 
pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.127480 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.127507 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.127560 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.173795 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-5898z"] Feb 26 11:35:17 crc kubenswrapper[4699]: E0226 11:35:17.174545 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-tchk9 openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5576978c7c-5898z" podUID="390537ad-fb8f-417c-9577-c6958c371659" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.240658 4699 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-8c6f6df99-hddfn"] Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.242313 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.244060 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.244106 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.244164 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.244224 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-config\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.244250 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-svc\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.244272 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.244293 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tchk9\" (UniqueName: \"kubernetes.io/projected/390537ad-fb8f-417c-9577-c6958c371659-kube-api-access-tchk9\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.245446 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.245917 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-config\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.246366 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-svc\") pod 
\"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.246717 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.247108 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.247214 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.280356 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-hddfn"] Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.290182 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tchk9\" (UniqueName: \"kubernetes.io/projected/390537ad-fb8f-417c-9577-c6958c371659-kube-api-access-tchk9\") pod \"dnsmasq-dns-5576978c7c-5898z\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.346171 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.346254 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.346322 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.346345 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-config\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.346369 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.346384 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.346419 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtm4\" (UniqueName: \"kubernetes.io/projected/24dd88a8-4737-4ebc-8925-b2bcedb760c2-kube-api-access-7vtm4\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.447789 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.447846 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.447906 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtm4\" (UniqueName: \"kubernetes.io/projected/24dd88a8-4737-4ebc-8925-b2bcedb760c2-kube-api-access-7vtm4\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.448015 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.448062 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.448166 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.448198 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-config\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.448876 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.448930 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.449005 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.449098 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.449224 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.449431 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24dd88a8-4737-4ebc-8925-b2bcedb760c2-config\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.528239 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtm4\" (UniqueName: 
\"kubernetes.io/projected/24dd88a8-4737-4ebc-8925-b2bcedb760c2-kube-api-access-7vtm4\") pod \"dnsmasq-dns-8c6f6df99-hddfn\" (UID: \"24dd88a8-4737-4ebc-8925-b2bcedb760c2\") " pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.591585 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.708708 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d9b2e6e-c43b-49ae-a71e-844610621e3e","Type":"ContainerStarted","Data":"5aad5beafa395051116bd03caf5da12524f4ae6cf970c26fa65a71dc636e2c06"} Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.710197 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b731314-eb90-4a19-a425-2f9282af2a7f","Type":"ContainerStarted","Data":"a46589663d794710d4c07fab7c01ba86c58a47a81946b5e4edd54fdd8b063a57"} Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.710217 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.812092 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.957573 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-svc\") pod \"390537ad-fb8f-417c-9577-c6958c371659\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.958052 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-openstack-edpm-ipam\") pod \"390537ad-fb8f-417c-9577-c6958c371659\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.958134 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tchk9\" (UniqueName: \"kubernetes.io/projected/390537ad-fb8f-417c-9577-c6958c371659-kube-api-access-tchk9\") pod \"390537ad-fb8f-417c-9577-c6958c371659\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.958262 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-config\") pod \"390537ad-fb8f-417c-9577-c6958c371659\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.958271 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "390537ad-fb8f-417c-9577-c6958c371659" (UID: "390537ad-fb8f-417c-9577-c6958c371659"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.958337 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-nb\") pod \"390537ad-fb8f-417c-9577-c6958c371659\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.958588 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "390537ad-fb8f-417c-9577-c6958c371659" (UID: "390537ad-fb8f-417c-9577-c6958c371659"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.958825 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-swift-storage-0\") pod \"390537ad-fb8f-417c-9577-c6958c371659\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.958895 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-sb\") pod \"390537ad-fb8f-417c-9577-c6958c371659\" (UID: \"390537ad-fb8f-417c-9577-c6958c371659\") " Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.959034 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "390537ad-fb8f-417c-9577-c6958c371659" (UID: "390537ad-fb8f-417c-9577-c6958c371659"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.959450 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "390537ad-fb8f-417c-9577-c6958c371659" (UID: "390537ad-fb8f-417c-9577-c6958c371659"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.959514 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.959523 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "390537ad-fb8f-417c-9577-c6958c371659" (UID: "390537ad-fb8f-417c-9577-c6958c371659"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.959532 4699 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.959563 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:17 crc kubenswrapper[4699]: I0226 11:35:17.959593 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-config" (OuterVolumeSpecName: "config") pod "390537ad-fb8f-417c-9577-c6958c371659" (UID: "390537ad-fb8f-417c-9577-c6958c371659"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.061897 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.061964 4699 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.061986 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/390537ad-fb8f-417c-9577-c6958c371659-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.126238 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/390537ad-fb8f-417c-9577-c6958c371659-kube-api-access-tchk9" (OuterVolumeSpecName: "kube-api-access-tchk9") pod "390537ad-fb8f-417c-9577-c6958c371659" (UID: "390537ad-fb8f-417c-9577-c6958c371659"). InnerVolumeSpecName "kube-api-access-tchk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:18 crc kubenswrapper[4699]: W0226 11:35:18.162924 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24dd88a8_4737_4ebc_8925_b2bcedb760c2.slice/crio-61314fbd31e9e12163888c8229d0a18a04f98696e27f58cce31b7c12f38dec1f WatchSource:0}: Error finding container 61314fbd31e9e12163888c8229d0a18a04f98696e27f58cce31b7c12f38dec1f: Status 404 returned error can't find the container with id 61314fbd31e9e12163888c8229d0a18a04f98696e27f58cce31b7c12f38dec1f Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.163226 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tchk9\" (UniqueName: \"kubernetes.io/projected/390537ad-fb8f-417c-9577-c6958c371659-kube-api-access-tchk9\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.163783 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-hddfn"] Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.788016 4699 generic.go:334] "Generic (PLEG): container finished" podID="24dd88a8-4737-4ebc-8925-b2bcedb760c2" containerID="957c1998dc4811d9f911eb451c4fa82b1cc78906876fe523cf44f5d6bff01ae4" exitCode=0 Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.788169 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" event={"ID":"24dd88a8-4737-4ebc-8925-b2bcedb760c2","Type":"ContainerDied","Data":"957c1998dc4811d9f911eb451c4fa82b1cc78906876fe523cf44f5d6bff01ae4"} Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.789357 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" event={"ID":"24dd88a8-4737-4ebc-8925-b2bcedb760c2","Type":"ContainerStarted","Data":"61314fbd31e9e12163888c8229d0a18a04f98696e27f58cce31b7c12f38dec1f"} Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.793819 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d9b2e6e-c43b-49ae-a71e-844610621e3e","Type":"ContainerStarted","Data":"55f4011887d6914b7d8dfc8eb0b5e6a2ccfc779f66663a9834966baecf2a10a6"} Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.795906 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-5898z" Feb 26 11:35:18 crc kubenswrapper[4699]: I0226 11:35:18.796615 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b731314-eb90-4a19-a425-2f9282af2a7f","Type":"ContainerStarted","Data":"f1100fd18904af6344b106082dc21d94e757513c180993bbeb69617d1198ee7b"} Feb 26 11:35:19 crc kubenswrapper[4699]: I0226 11:35:19.038602 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-5898z"] Feb 26 11:35:19 crc kubenswrapper[4699]: I0226 11:35:19.047874 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-5898z"] Feb 26 11:35:19 crc kubenswrapper[4699]: I0226 11:35:19.808833 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" event={"ID":"24dd88a8-4737-4ebc-8925-b2bcedb760c2","Type":"ContainerStarted","Data":"1e294159b0f9ba0aebb703b10b2bb5ec590973057e4b46964cbb2bd082081378"} Feb 26 11:35:19 crc kubenswrapper[4699]: I0226 11:35:19.809209 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:19 crc kubenswrapper[4699]: I0226 11:35:19.835355 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" 
podStartSLOduration=2.835334031 podStartE2EDuration="2.835334031s" podCreationTimestamp="2026-02-26 11:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:35:19.824409485 +0000 UTC m=+1465.635235919" watchObservedRunningTime="2026-02-26 11:35:19.835334031 +0000 UTC m=+1465.646160465" Feb 26 11:35:20 crc kubenswrapper[4699]: I0226 11:35:20.272445 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="390537ad-fb8f-417c-9577-c6958c371659" path="/var/lib/kubelet/pods/390537ad-fb8f-417c-9577-c6958c371659/volumes" Feb 26 11:35:27 crc kubenswrapper[4699]: I0226 11:35:27.593933 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8c6f6df99-hddfn" Feb 26 11:35:27 crc kubenswrapper[4699]: I0226 11:35:27.653356 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-n24ct"] Feb 26 11:35:27 crc kubenswrapper[4699]: I0226 11:35:27.653673 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct" podUID="a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" containerName="dnsmasq-dns" containerID="cri-o://6eaae1ee8cf33fbf9c5f7338398f314d84ab95982df8c9ecfdd230c190623ca8" gracePeriod=10 Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.001739 4699 generic.go:334] "Generic (PLEG): container finished" podID="a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" containerID="6eaae1ee8cf33fbf9c5f7338398f314d84ab95982df8c9ecfdd230c190623ca8" exitCode=0 Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.002062 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct" event={"ID":"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8","Type":"ContainerDied","Data":"6eaae1ee8cf33fbf9c5f7338398f314d84ab95982df8c9ecfdd230c190623ca8"} Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.124483 4699 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct" Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.198187 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-sb\") pod \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.198283 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-nb\") pod \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.198332 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr9gh\" (UniqueName: \"kubernetes.io/projected/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-kube-api-access-mr9gh\") pod \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.198422 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-swift-storage-0\") pod \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.198450 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-svc\") pod \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.198544 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-config\") pod \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\" (UID: \"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8\") " Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.203452 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-kube-api-access-mr9gh" (OuterVolumeSpecName: "kube-api-access-mr9gh") pod "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" (UID: "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8"). InnerVolumeSpecName "kube-api-access-mr9gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.247317 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" (UID: "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.248384 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" (UID: "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.251523 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" (UID: "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.258452 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-config" (OuterVolumeSpecName: "config") pod "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" (UID: "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.267217 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" (UID: "a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.304385 4699 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.304422 4699 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.304433 4699 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.304443 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:28 
crc kubenswrapper[4699]: I0226 11:35:28.304452 4699 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:28 crc kubenswrapper[4699]: I0226 11:35:28.304461 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr9gh\" (UniqueName: \"kubernetes.io/projected/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8-kube-api-access-mr9gh\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:29 crc kubenswrapper[4699]: I0226 11:35:29.012008 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct" event={"ID":"a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8","Type":"ContainerDied","Data":"d1d082240eaff72440b2e6ab6682cc7abdf39c898255b3c76048247bf61866be"} Feb 26 11:35:29 crc kubenswrapper[4699]: I0226 11:35:29.012177 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-n24ct" Feb 26 11:35:29 crc kubenswrapper[4699]: I0226 11:35:29.012423 4699 scope.go:117] "RemoveContainer" containerID="6eaae1ee8cf33fbf9c5f7338398f314d84ab95982df8c9ecfdd230c190623ca8" Feb 26 11:35:29 crc kubenswrapper[4699]: I0226 11:35:29.037765 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-n24ct"] Feb 26 11:35:29 crc kubenswrapper[4699]: I0226 11:35:29.038640 4699 scope.go:117] "RemoveContainer" containerID="ac279ccad47adb2f6ab2c9bfda803625849869922644f32045786543361b143f" Feb 26 11:35:29 crc kubenswrapper[4699]: I0226 11:35:29.046891 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-n24ct"] Feb 26 11:35:30 crc kubenswrapper[4699]: I0226 11:35:30.272564 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" path="/var/lib/kubelet/pods/a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8/volumes" Feb 26 11:35:36 crc 
kubenswrapper[4699]: I0226 11:35:36.203982 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n"] Feb 26 11:35:36 crc kubenswrapper[4699]: E0226 11:35:36.205352 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" containerName="init" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.205374 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" containerName="init" Feb 26 11:35:36 crc kubenswrapper[4699]: E0226 11:35:36.205410 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" containerName="dnsmasq-dns" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.205420 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" containerName="dnsmasq-dns" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.205685 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e9b0e9-b01f-4c92-aa43-f31c9fd397f8" containerName="dnsmasq-dns" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.206714 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.210773 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.210812 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.211054 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.214959 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n"] Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.217633 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.338271 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.338331 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc 
kubenswrapper[4699]: I0226 11:35:36.338607 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfm59\" (UniqueName: \"kubernetes.io/projected/57bbec48-f33e-43b8-9f82-8cc3a42e7723-kube-api-access-kfm59\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.338813 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.441188 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.441384 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.441428 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.441572 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfm59\" (UniqueName: \"kubernetes.io/projected/57bbec48-f33e-43b8-9f82-8cc3a42e7723-kube-api-access-kfm59\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.448899 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.449140 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.453726 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.459671 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfm59\" (UniqueName: \"kubernetes.io/projected/57bbec48-f33e-43b8-9f82-8cc3a42e7723-kube-api-access-kfm59\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:36 crc kubenswrapper[4699]: I0226 11:35:36.530244 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:35:37 crc kubenswrapper[4699]: W0226 11:35:37.047912 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57bbec48_f33e_43b8_9f82_8cc3a42e7723.slice/crio-af6016a31142a78e40da5360c3d498d8cf13c0803705344a16985003cec0582e WatchSource:0}: Error finding container af6016a31142a78e40da5360c3d498d8cf13c0803705344a16985003cec0582e: Status 404 returned error can't find the container with id af6016a31142a78e40da5360c3d498d8cf13c0803705344a16985003cec0582e Feb 26 11:35:37 crc kubenswrapper[4699]: I0226 11:35:37.050373 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n"] Feb 26 11:35:37 crc kubenswrapper[4699]: I0226 11:35:37.050984 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:35:37 crc kubenswrapper[4699]: I0226 11:35:37.317373 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" event={"ID":"57bbec48-f33e-43b8-9f82-8cc3a42e7723","Type":"ContainerStarted","Data":"af6016a31142a78e40da5360c3d498d8cf13c0803705344a16985003cec0582e"} Feb 26 11:35:51 crc kubenswrapper[4699]: I0226 
11:35:51.408787 4699 generic.go:334] "Generic (PLEG): container finished" podID="0d9b2e6e-c43b-49ae-a71e-844610621e3e" containerID="55f4011887d6914b7d8dfc8eb0b5e6a2ccfc779f66663a9834966baecf2a10a6" exitCode=0 Feb 26 11:35:51 crc kubenswrapper[4699]: I0226 11:35:51.408873 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d9b2e6e-c43b-49ae-a71e-844610621e3e","Type":"ContainerDied","Data":"55f4011887d6914b7d8dfc8eb0b5e6a2ccfc779f66663a9834966baecf2a10a6"} Feb 26 11:35:51 crc kubenswrapper[4699]: I0226 11:35:51.412140 4699 generic.go:334] "Generic (PLEG): container finished" podID="3b731314-eb90-4a19-a425-2f9282af2a7f" containerID="f1100fd18904af6344b106082dc21d94e757513c180993bbeb69617d1198ee7b" exitCode=0 Feb 26 11:35:51 crc kubenswrapper[4699]: I0226 11:35:51.412180 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b731314-eb90-4a19-a425-2f9282af2a7f","Type":"ContainerDied","Data":"f1100fd18904af6344b106082dc21d94e757513c180993bbeb69617d1198ee7b"} Feb 26 11:35:52 crc kubenswrapper[4699]: E0226 11:35:52.320409 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Feb 26 11:35:52 crc kubenswrapper[4699]: E0226 11:35:52.321100 4699 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 11:35:52 crc kubenswrapper[4699]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Feb 26 11:35:52 crc kubenswrapper[4699]: - hosts: all Feb 26 11:35:52 crc kubenswrapper[4699]: strategy: 
linear Feb 26 11:35:52 crc kubenswrapper[4699]: tasks: Feb 26 11:35:52 crc kubenswrapper[4699]: - name: Enable podified-repos Feb 26 11:35:52 crc kubenswrapper[4699]: become: true Feb 26 11:35:52 crc kubenswrapper[4699]: ansible.builtin.shell: | Feb 26 11:35:52 crc kubenswrapper[4699]: set -euxo pipefail Feb 26 11:35:52 crc kubenswrapper[4699]: pushd /var/tmp Feb 26 11:35:52 crc kubenswrapper[4699]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Feb 26 11:35:52 crc kubenswrapper[4699]: pushd repo-setup-main Feb 26 11:35:52 crc kubenswrapper[4699]: python3 -m venv ./venv Feb 26 11:35:52 crc kubenswrapper[4699]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Feb 26 11:35:52 crc kubenswrapper[4699]: ./venv/bin/repo-setup current-podified -b antelope Feb 26 11:35:52 crc kubenswrapper[4699]: popd Feb 26 11:35:52 crc kubenswrapper[4699]: rm -rf repo-setup-main Feb 26 11:35:52 crc kubenswrapper[4699]: Feb 26 11:35:52 crc kubenswrapper[4699]: Feb 26 11:35:52 crc kubenswrapper[4699]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Feb 26 11:35:52 crc kubenswrapper[4699]: edpm_override_hosts: openstack-edpm-ipam Feb 26 11:35:52 crc kubenswrapper[4699]: edpm_service_type: repo-setup Feb 26 11:35:52 crc kubenswrapper[4699]: Feb 26 11:35:52 crc kubenswrapper[4699]: Feb 26 11:35:52 crc kubenswrapper[4699]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfm59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n_openstack(57bbec48-f33e-43b8-9f82-8cc3a42e7723): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Feb 26 11:35:52 crc kubenswrapper[4699]: > logger="UnhandledError" Feb 26 11:35:52 crc kubenswrapper[4699]: E0226 11:35:52.322366 4699 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" podUID="57bbec48-f33e-43b8-9f82-8cc3a42e7723" Feb 26 11:35:52 crc kubenswrapper[4699]: E0226 11:35:52.421497 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" podUID="57bbec48-f33e-43b8-9f82-8cc3a42e7723" Feb 26 11:35:53 crc kubenswrapper[4699]: I0226 11:35:53.436469 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d9b2e6e-c43b-49ae-a71e-844610621e3e","Type":"ContainerStarted","Data":"081ba4071f2b3c84f8726ec4603efe137af81b7020ff19b2eeacb43719124818"} Feb 26 11:35:53 crc kubenswrapper[4699]: I0226 11:35:53.437023 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 11:35:53 crc kubenswrapper[4699]: I0226 11:35:53.441225 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b731314-eb90-4a19-a425-2f9282af2a7f","Type":"ContainerStarted","Data":"6e7541daf389c5f883354e578c9ebbd4867cbce55b6bc94a679cc8a43069d874"} Feb 26 11:35:53 crc kubenswrapper[4699]: I0226 11:35:53.442077 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:35:53 crc kubenswrapper[4699]: I0226 11:35:53.472518 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.472489024 podStartE2EDuration="38.472489024s" 
podCreationTimestamp="2026-02-26 11:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:35:53.458911787 +0000 UTC m=+1499.269738231" watchObservedRunningTime="2026-02-26 11:35:53.472489024 +0000 UTC m=+1499.283315458" Feb 26 11:35:53 crc kubenswrapper[4699]: I0226 11:35:53.498426 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.49840161 podStartE2EDuration="37.49840161s" podCreationTimestamp="2026-02-26 11:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:35:53.487997507 +0000 UTC m=+1499.298823961" watchObservedRunningTime="2026-02-26 11:35:53.49840161 +0000 UTC m=+1499.309228044" Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.007934 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b92vt"] Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.010837 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.019038 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b92vt"] Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.142011 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdc5p\" (UniqueName: \"kubernetes.io/projected/095e0632-b9cc-4410-af45-249da70797aa-kube-api-access-fdc5p\") pod \"redhat-operators-b92vt\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.142075 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-utilities\") pod \"redhat-operators-b92vt\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.142327 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-catalog-content\") pod \"redhat-operators-b92vt\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.244712 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-catalog-content\") pod \"redhat-operators-b92vt\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.244846 4699 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-fdc5p\" (UniqueName: \"kubernetes.io/projected/095e0632-b9cc-4410-af45-249da70797aa-kube-api-access-fdc5p\") pod \"redhat-operators-b92vt\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.244890 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-utilities\") pod \"redhat-operators-b92vt\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.245407 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-catalog-content\") pod \"redhat-operators-b92vt\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.245448 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-utilities\") pod \"redhat-operators-b92vt\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.267396 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdc5p\" (UniqueName: \"kubernetes.io/projected/095e0632-b9cc-4410-af45-249da70797aa-kube-api-access-fdc5p\") pod \"redhat-operators-b92vt\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:35:55 crc kubenswrapper[4699]: I0226 11:35:55.332194 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:35:56 crc kubenswrapper[4699]: I0226 11:35:55.999656 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b92vt"] Feb 26 11:35:57 crc kubenswrapper[4699]: I0226 11:35:57.074023 4699 generic.go:334] "Generic (PLEG): container finished" podID="095e0632-b9cc-4410-af45-249da70797aa" containerID="a412e1611c3e8a9530b03f1738de0d8137ca46fac15c0f749599ab1aabddd3c3" exitCode=0 Feb 26 11:35:57 crc kubenswrapper[4699]: I0226 11:35:57.074406 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b92vt" event={"ID":"095e0632-b9cc-4410-af45-249da70797aa","Type":"ContainerDied","Data":"a412e1611c3e8a9530b03f1738de0d8137ca46fac15c0f749599ab1aabddd3c3"} Feb 26 11:35:57 crc kubenswrapper[4699]: I0226 11:35:57.074441 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b92vt" event={"ID":"095e0632-b9cc-4410-af45-249da70797aa","Type":"ContainerStarted","Data":"d12ea0f251bc41e4b956605602d54f047da25af921010667a43f8d590bf06d61"} Feb 26 11:35:59 crc kubenswrapper[4699]: I0226 11:35:59.151133 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b92vt" event={"ID":"095e0632-b9cc-4410-af45-249da70797aa","Type":"ContainerStarted","Data":"b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6"} Feb 26 11:36:00 crc kubenswrapper[4699]: I0226 11:36:00.157408 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535096-xr7rk"] Feb 26 11:36:00 crc kubenswrapper[4699]: I0226 11:36:00.159029 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535096-xr7rk" Feb 26 11:36:00 crc kubenswrapper[4699]: I0226 11:36:00.168635 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:36:00 crc kubenswrapper[4699]: I0226 11:36:00.169065 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:36:00 crc kubenswrapper[4699]: I0226 11:36:00.169293 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:36:00 crc kubenswrapper[4699]: I0226 11:36:00.180348 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535096-xr7rk"] Feb 26 11:36:00 crc kubenswrapper[4699]: I0226 11:36:00.288455 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzw9d\" (UniqueName: \"kubernetes.io/projected/6b65e61c-3853-4fd6-93c2-9d13c6776589-kube-api-access-lzw9d\") pod \"auto-csr-approver-29535096-xr7rk\" (UID: \"6b65e61c-3853-4fd6-93c2-9d13c6776589\") " pod="openshift-infra/auto-csr-approver-29535096-xr7rk" Feb 26 11:36:00 crc kubenswrapper[4699]: I0226 11:36:00.390690 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzw9d\" (UniqueName: \"kubernetes.io/projected/6b65e61c-3853-4fd6-93c2-9d13c6776589-kube-api-access-lzw9d\") pod \"auto-csr-approver-29535096-xr7rk\" (UID: \"6b65e61c-3853-4fd6-93c2-9d13c6776589\") " pod="openshift-infra/auto-csr-approver-29535096-xr7rk" Feb 26 11:36:00 crc kubenswrapper[4699]: I0226 11:36:00.875743 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzw9d\" (UniqueName: \"kubernetes.io/projected/6b65e61c-3853-4fd6-93c2-9d13c6776589-kube-api-access-lzw9d\") pod \"auto-csr-approver-29535096-xr7rk\" (UID: \"6b65e61c-3853-4fd6-93c2-9d13c6776589\") " 
pod="openshift-infra/auto-csr-approver-29535096-xr7rk" Feb 26 11:36:01 crc kubenswrapper[4699]: I0226 11:36:01.090731 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535096-xr7rk" Feb 26 11:36:01 crc kubenswrapper[4699]: W0226 11:36:01.567579 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b65e61c_3853_4fd6_93c2_9d13c6776589.slice/crio-e79e93ab54a6caa74fe83a7d4937a5bd6311e550b7ba931ab206e7d1f93c0ced WatchSource:0}: Error finding container e79e93ab54a6caa74fe83a7d4937a5bd6311e550b7ba931ab206e7d1f93c0ced: Status 404 returned error can't find the container with id e79e93ab54a6caa74fe83a7d4937a5bd6311e550b7ba931ab206e7d1f93c0ced Feb 26 11:36:01 crc kubenswrapper[4699]: I0226 11:36:01.584806 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535096-xr7rk"] Feb 26 11:36:02 crc kubenswrapper[4699]: I0226 11:36:02.217802 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535096-xr7rk" event={"ID":"6b65e61c-3853-4fd6-93c2-9d13c6776589","Type":"ContainerStarted","Data":"e79e93ab54a6caa74fe83a7d4937a5bd6311e550b7ba931ab206e7d1f93c0ced"} Feb 26 11:36:04 crc kubenswrapper[4699]: I0226 11:36:04.252771 4699 generic.go:334] "Generic (PLEG): container finished" podID="6b65e61c-3853-4fd6-93c2-9d13c6776589" containerID="dd9ce01dbb3d28e8559eda1261c169a7dbac7ba191f3aabd0c7a5d33511f3c12" exitCode=0 Feb 26 11:36:04 crc kubenswrapper[4699]: I0226 11:36:04.252866 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535096-xr7rk" event={"ID":"6b65e61c-3853-4fd6-93c2-9d13c6776589","Type":"ContainerDied","Data":"dd9ce01dbb3d28e8559eda1261c169a7dbac7ba191f3aabd0c7a5d33511f3c12"} Feb 26 11:36:05 crc kubenswrapper[4699]: I0226 11:36:05.597054 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535096-xr7rk" Feb 26 11:36:05 crc kubenswrapper[4699]: I0226 11:36:05.763724 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzw9d\" (UniqueName: \"kubernetes.io/projected/6b65e61c-3853-4fd6-93c2-9d13c6776589-kube-api-access-lzw9d\") pod \"6b65e61c-3853-4fd6-93c2-9d13c6776589\" (UID: \"6b65e61c-3853-4fd6-93c2-9d13c6776589\") " Feb 26 11:36:05 crc kubenswrapper[4699]: I0226 11:36:05.771743 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b65e61c-3853-4fd6-93c2-9d13c6776589-kube-api-access-lzw9d" (OuterVolumeSpecName: "kube-api-access-lzw9d") pod "6b65e61c-3853-4fd6-93c2-9d13c6776589" (UID: "6b65e61c-3853-4fd6-93c2-9d13c6776589"). InnerVolumeSpecName "kube-api-access-lzw9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:05 crc kubenswrapper[4699]: I0226 11:36:05.866593 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzw9d\" (UniqueName: \"kubernetes.io/projected/6b65e61c-3853-4fd6-93c2-9d13c6776589-kube-api-access-lzw9d\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:06 crc kubenswrapper[4699]: I0226 11:36:06.240568 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 26 11:36:06 crc kubenswrapper[4699]: I0226 11:36:06.287642 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535096-xr7rk" event={"ID":"6b65e61c-3853-4fd6-93c2-9d13c6776589","Type":"ContainerDied","Data":"e79e93ab54a6caa74fe83a7d4937a5bd6311e550b7ba931ab206e7d1f93c0ced"} Feb 26 11:36:06 crc kubenswrapper[4699]: I0226 11:36:06.287949 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e79e93ab54a6caa74fe83a7d4937a5bd6311e550b7ba931ab206e7d1f93c0ced" Feb 26 11:36:06 crc kubenswrapper[4699]: I0226 11:36:06.288202 4699 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535096-xr7rk" Feb 26 11:36:06 crc kubenswrapper[4699]: I0226 11:36:06.401385 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:06 crc kubenswrapper[4699]: I0226 11:36:06.689629 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535090-7v44h"] Feb 26 11:36:06 crc kubenswrapper[4699]: I0226 11:36:06.710510 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535090-7v44h"] Feb 26 11:36:08 crc kubenswrapper[4699]: I0226 11:36:08.277912 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0d38a99-b56f-423c-9c5b-c8f726bf62f9" path="/var/lib/kubelet/pods/a0d38a99-b56f-423c-9c5b-c8f726bf62f9/volumes" Feb 26 11:36:15 crc kubenswrapper[4699]: I0226 11:36:15.548958 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:36:16 crc kubenswrapper[4699]: I0226 11:36:16.407290 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" event={"ID":"57bbec48-f33e-43b8-9f82-8cc3a42e7723","Type":"ContainerStarted","Data":"d3b1a1a717449801469d3bbcb93483dc2d3c83e649043f7dd4668fd3aea9c6fd"} Feb 26 11:36:17 crc kubenswrapper[4699]: I0226 11:36:17.442151 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" podStartSLOduration=2.947932414 podStartE2EDuration="41.442078763s" podCreationTimestamp="2026-02-26 11:35:36 +0000 UTC" firstStartedPulling="2026-02-26 11:35:37.050691479 +0000 UTC m=+1482.861517923" lastFinishedPulling="2026-02-26 11:36:15.544837838 +0000 UTC m=+1521.355664272" observedRunningTime="2026-02-26 11:36:17.436324757 +0000 UTC m=+1523.247151221" watchObservedRunningTime="2026-02-26 
11:36:17.442078763 +0000 UTC m=+1523.252905197" Feb 26 11:36:19 crc kubenswrapper[4699]: I0226 11:36:19.437083 4699 generic.go:334] "Generic (PLEG): container finished" podID="095e0632-b9cc-4410-af45-249da70797aa" containerID="b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6" exitCode=0 Feb 26 11:36:19 crc kubenswrapper[4699]: I0226 11:36:19.437155 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b92vt" event={"ID":"095e0632-b9cc-4410-af45-249da70797aa","Type":"ContainerDied","Data":"b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6"} Feb 26 11:36:20 crc kubenswrapper[4699]: I0226 11:36:20.449542 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b92vt" event={"ID":"095e0632-b9cc-4410-af45-249da70797aa","Type":"ContainerStarted","Data":"12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99"} Feb 26 11:36:20 crc kubenswrapper[4699]: I0226 11:36:20.479273 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b92vt" podStartSLOduration=3.58520726 podStartE2EDuration="26.479245947s" podCreationTimestamp="2026-02-26 11:35:54 +0000 UTC" firstStartedPulling="2026-02-26 11:35:57.07659021 +0000 UTC m=+1502.887416644" lastFinishedPulling="2026-02-26 11:36:19.970628897 +0000 UTC m=+1525.781455331" observedRunningTime="2026-02-26 11:36:20.469461293 +0000 UTC m=+1526.280287737" watchObservedRunningTime="2026-02-26 11:36:20.479245947 +0000 UTC m=+1526.290072391" Feb 26 11:36:25 crc kubenswrapper[4699]: I0226 11:36:25.332978 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:36:25 crc kubenswrapper[4699]: I0226 11:36:25.333495 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:36:26 crc kubenswrapper[4699]: I0226 
11:36:26.378748 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b92vt" podUID="095e0632-b9cc-4410-af45-249da70797aa" containerName="registry-server" probeResult="failure" output=< Feb 26 11:36:26 crc kubenswrapper[4699]: timeout: failed to connect service ":50051" within 1s Feb 26 11:36:26 crc kubenswrapper[4699]: > Feb 26 11:36:28 crc kubenswrapper[4699]: I0226 11:36:28.139528 4699 scope.go:117] "RemoveContainer" containerID="80050d8650124cdda213563d70066e26f43de8d356825ac23d9b4fdfcc1d3b22" Feb 26 11:36:28 crc kubenswrapper[4699]: I0226 11:36:28.168147 4699 scope.go:117] "RemoveContainer" containerID="02c1126ec0d166bfd6091e444f16da2788ee1d75f58864b8bc99a6f2547f9104" Feb 26 11:36:30 crc kubenswrapper[4699]: I0226 11:36:30.550724 4699 generic.go:334] "Generic (PLEG): container finished" podID="57bbec48-f33e-43b8-9f82-8cc3a42e7723" containerID="d3b1a1a717449801469d3bbcb93483dc2d3c83e649043f7dd4668fd3aea9c6fd" exitCode=0 Feb 26 11:36:30 crc kubenswrapper[4699]: I0226 11:36:30.550848 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" event={"ID":"57bbec48-f33e-43b8-9f82-8cc3a42e7723","Type":"ContainerDied","Data":"d3b1a1a717449801469d3bbcb93483dc2d3c83e649043f7dd4668fd3aea9c6fd"} Feb 26 11:36:31 crc kubenswrapper[4699]: I0226 11:36:31.946368 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.002774 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-inventory\") pod \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.003001 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfm59\" (UniqueName: \"kubernetes.io/projected/57bbec48-f33e-43b8-9f82-8cc3a42e7723-kube-api-access-kfm59\") pod \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.003030 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-repo-setup-combined-ca-bundle\") pod \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.003165 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-ssh-key-openstack-edpm-ipam\") pod \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\" (UID: \"57bbec48-f33e-43b8-9f82-8cc3a42e7723\") " Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.008737 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "57bbec48-f33e-43b8-9f82-8cc3a42e7723" (UID: "57bbec48-f33e-43b8-9f82-8cc3a42e7723"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.008964 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57bbec48-f33e-43b8-9f82-8cc3a42e7723-kube-api-access-kfm59" (OuterVolumeSpecName: "kube-api-access-kfm59") pod "57bbec48-f33e-43b8-9f82-8cc3a42e7723" (UID: "57bbec48-f33e-43b8-9f82-8cc3a42e7723"). InnerVolumeSpecName "kube-api-access-kfm59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.035569 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-inventory" (OuterVolumeSpecName: "inventory") pod "57bbec48-f33e-43b8-9f82-8cc3a42e7723" (UID: "57bbec48-f33e-43b8-9f82-8cc3a42e7723"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.037906 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "57bbec48-f33e-43b8-9f82-8cc3a42e7723" (UID: "57bbec48-f33e-43b8-9f82-8cc3a42e7723"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.105042 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.105075 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.105084 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfm59\" (UniqueName: \"kubernetes.io/projected/57bbec48-f33e-43b8-9f82-8cc3a42e7723-kube-api-access-kfm59\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.105092 4699 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bbec48-f33e-43b8-9f82-8cc3a42e7723-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.571087 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" event={"ID":"57bbec48-f33e-43b8-9f82-8cc3a42e7723","Type":"ContainerDied","Data":"af6016a31142a78e40da5360c3d498d8cf13c0803705344a16985003cec0582e"} Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.571400 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af6016a31142a78e40da5360c3d498d8cf13c0803705344a16985003cec0582e" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.571183 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.655274 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z"] Feb 26 11:36:32 crc kubenswrapper[4699]: E0226 11:36:32.655807 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57bbec48-f33e-43b8-9f82-8cc3a42e7723" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.655835 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="57bbec48-f33e-43b8-9f82-8cc3a42e7723" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 11:36:32 crc kubenswrapper[4699]: E0226 11:36:32.655854 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b65e61c-3853-4fd6-93c2-9d13c6776589" containerName="oc" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.655863 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b65e61c-3853-4fd6-93c2-9d13c6776589" containerName="oc" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.656140 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="57bbec48-f33e-43b8-9f82-8cc3a42e7723" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.656166 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b65e61c-3853-4fd6-93c2-9d13c6776589" containerName="oc" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.656913 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.658894 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.659991 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.660151 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.662269 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.664760 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z"] Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.716096 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zdf2z\" (UID: \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.716338 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zdf2z\" (UID: \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.716370 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbnnk\" (UniqueName: \"kubernetes.io/projected/fcea0fcf-0c80-4334-9327-f0a57b385cc9-kube-api-access-cbnnk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zdf2z\" (UID: \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.818258 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zdf2z\" (UID: \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.818383 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zdf2z\" (UID: \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.818413 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbnnk\" (UniqueName: \"kubernetes.io/projected/fcea0fcf-0c80-4334-9327-f0a57b385cc9-kube-api-access-cbnnk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zdf2z\" (UID: \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.822630 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zdf2z\" (UID: 
\"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.822676 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zdf2z\" (UID: \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.835572 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbnnk\" (UniqueName: \"kubernetes.io/projected/fcea0fcf-0c80-4334-9327-f0a57b385cc9-kube-api-access-cbnnk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zdf2z\" (UID: \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:32 crc kubenswrapper[4699]: I0226 11:36:32.971656 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:33 crc kubenswrapper[4699]: I0226 11:36:33.506338 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z"] Feb 26 11:36:33 crc kubenswrapper[4699]: I0226 11:36:33.581227 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" event={"ID":"fcea0fcf-0c80-4334-9327-f0a57b385cc9","Type":"ContainerStarted","Data":"eb9cf19f3dcfcec0226f2b1b4e3eeb146f04cb508c317c36f5c63ab6d203f2d3"} Feb 26 11:36:34 crc kubenswrapper[4699]: I0226 11:36:34.591470 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" event={"ID":"fcea0fcf-0c80-4334-9327-f0a57b385cc9","Type":"ContainerStarted","Data":"d1583c97a6f8ed5901159ae8fbbdacf36c4ff0c48237ee8801ed6a7b20f80324"} Feb 26 11:36:34 crc kubenswrapper[4699]: I0226 11:36:34.613588 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" podStartSLOduration=2.164491996 podStartE2EDuration="2.613563851s" podCreationTimestamp="2026-02-26 11:36:32 +0000 UTC" firstStartedPulling="2026-02-26 11:36:33.51113727 +0000 UTC m=+1539.321963714" lastFinishedPulling="2026-02-26 11:36:33.960209125 +0000 UTC m=+1539.771035569" observedRunningTime="2026-02-26 11:36:34.604721044 +0000 UTC m=+1540.415547478" watchObservedRunningTime="2026-02-26 11:36:34.613563851 +0000 UTC m=+1540.424390285" Feb 26 11:36:35 crc kubenswrapper[4699]: I0226 11:36:35.381526 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:36:35 crc kubenswrapper[4699]: I0226 11:36:35.432792 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:36:35 crc 
kubenswrapper[4699]: I0226 11:36:35.652850 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b92vt"] Feb 26 11:36:36 crc kubenswrapper[4699]: I0226 11:36:36.609203 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b92vt" podUID="095e0632-b9cc-4410-af45-249da70797aa" containerName="registry-server" containerID="cri-o://12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99" gracePeriod=2 Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.129057 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.200717 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-catalog-content\") pod \"095e0632-b9cc-4410-af45-249da70797aa\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.200795 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-utilities\") pod \"095e0632-b9cc-4410-af45-249da70797aa\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.200864 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdc5p\" (UniqueName: \"kubernetes.io/projected/095e0632-b9cc-4410-af45-249da70797aa-kube-api-access-fdc5p\") pod \"095e0632-b9cc-4410-af45-249da70797aa\" (UID: \"095e0632-b9cc-4410-af45-249da70797aa\") " Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.202075 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-utilities" 
(OuterVolumeSpecName: "utilities") pod "095e0632-b9cc-4410-af45-249da70797aa" (UID: "095e0632-b9cc-4410-af45-249da70797aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.207241 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095e0632-b9cc-4410-af45-249da70797aa-kube-api-access-fdc5p" (OuterVolumeSpecName: "kube-api-access-fdc5p") pod "095e0632-b9cc-4410-af45-249da70797aa" (UID: "095e0632-b9cc-4410-af45-249da70797aa"). InnerVolumeSpecName "kube-api-access-fdc5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.303377 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.303413 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdc5p\" (UniqueName: \"kubernetes.io/projected/095e0632-b9cc-4410-af45-249da70797aa-kube-api-access-fdc5p\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.331289 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "095e0632-b9cc-4410-af45-249da70797aa" (UID: "095e0632-b9cc-4410-af45-249da70797aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.405574 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095e0632-b9cc-4410-af45-249da70797aa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.624423 4699 generic.go:334] "Generic (PLEG): container finished" podID="fcea0fcf-0c80-4334-9327-f0a57b385cc9" containerID="d1583c97a6f8ed5901159ae8fbbdacf36c4ff0c48237ee8801ed6a7b20f80324" exitCode=0 Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.624481 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" event={"ID":"fcea0fcf-0c80-4334-9327-f0a57b385cc9","Type":"ContainerDied","Data":"d1583c97a6f8ed5901159ae8fbbdacf36c4ff0c48237ee8801ed6a7b20f80324"} Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.628281 4699 generic.go:334] "Generic (PLEG): container finished" podID="095e0632-b9cc-4410-af45-249da70797aa" containerID="12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99" exitCode=0 Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.628346 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b92vt" event={"ID":"095e0632-b9cc-4410-af45-249da70797aa","Type":"ContainerDied","Data":"12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99"} Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.628369 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b92vt" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.628393 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b92vt" event={"ID":"095e0632-b9cc-4410-af45-249da70797aa","Type":"ContainerDied","Data":"d12ea0f251bc41e4b956605602d54f047da25af921010667a43f8d590bf06d61"} Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.628420 4699 scope.go:117] "RemoveContainer" containerID="12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.661049 4699 scope.go:117] "RemoveContainer" containerID="b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.665633 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b92vt"] Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.681999 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b92vt"] Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.689317 4699 scope.go:117] "RemoveContainer" containerID="a412e1611c3e8a9530b03f1738de0d8137ca46fac15c0f749599ab1aabddd3c3" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.726368 4699 scope.go:117] "RemoveContainer" containerID="12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99" Feb 26 11:36:37 crc kubenswrapper[4699]: E0226 11:36:37.726940 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99\": container with ID starting with 12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99 not found: ID does not exist" containerID="12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.726983 4699 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99"} err="failed to get container status \"12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99\": rpc error: code = NotFound desc = could not find container \"12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99\": container with ID starting with 12ebf9a305b3675fffc94fa24d3e61874d69efaed6066d44344171bd74ba7c99 not found: ID does not exist" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.727010 4699 scope.go:117] "RemoveContainer" containerID="b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6" Feb 26 11:36:37 crc kubenswrapper[4699]: E0226 11:36:37.727332 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6\": container with ID starting with b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6 not found: ID does not exist" containerID="b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.727367 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6"} err="failed to get container status \"b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6\": rpc error: code = NotFound desc = could not find container \"b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6\": container with ID starting with b30d4966bced026f239d19e3ba2096688e67d934ba8c4fdf464484f205dad9f6 not found: ID does not exist" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.727381 4699 scope.go:117] "RemoveContainer" containerID="a412e1611c3e8a9530b03f1738de0d8137ca46fac15c0f749599ab1aabddd3c3" Feb 26 11:36:37 crc kubenswrapper[4699]: E0226 
11:36:37.727612 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a412e1611c3e8a9530b03f1738de0d8137ca46fac15c0f749599ab1aabddd3c3\": container with ID starting with a412e1611c3e8a9530b03f1738de0d8137ca46fac15c0f749599ab1aabddd3c3 not found: ID does not exist" containerID="a412e1611c3e8a9530b03f1738de0d8137ca46fac15c0f749599ab1aabddd3c3" Feb 26 11:36:37 crc kubenswrapper[4699]: I0226 11:36:37.727628 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a412e1611c3e8a9530b03f1738de0d8137ca46fac15c0f749599ab1aabddd3c3"} err="failed to get container status \"a412e1611c3e8a9530b03f1738de0d8137ca46fac15c0f749599ab1aabddd3c3\": rpc error: code = NotFound desc = could not find container \"a412e1611c3e8a9530b03f1738de0d8137ca46fac15c0f749599ab1aabddd3c3\": container with ID starting with a412e1611c3e8a9530b03f1738de0d8137ca46fac15c0f749599ab1aabddd3c3 not found: ID does not exist" Feb 26 11:36:38 crc kubenswrapper[4699]: I0226 11:36:38.271251 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="095e0632-b9cc-4410-af45-249da70797aa" path="/var/lib/kubelet/pods/095e0632-b9cc-4410-af45-249da70797aa/volumes" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.032064 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.137024 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-inventory\") pod \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\" (UID: \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.137509 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbnnk\" (UniqueName: \"kubernetes.io/projected/fcea0fcf-0c80-4334-9327-f0a57b385cc9-kube-api-access-cbnnk\") pod \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\" (UID: \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.137539 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-ssh-key-openstack-edpm-ipam\") pod \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\" (UID: \"fcea0fcf-0c80-4334-9327-f0a57b385cc9\") " Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.142686 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcea0fcf-0c80-4334-9327-f0a57b385cc9-kube-api-access-cbnnk" (OuterVolumeSpecName: "kube-api-access-cbnnk") pod "fcea0fcf-0c80-4334-9327-f0a57b385cc9" (UID: "fcea0fcf-0c80-4334-9327-f0a57b385cc9"). InnerVolumeSpecName "kube-api-access-cbnnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.163837 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-inventory" (OuterVolumeSpecName: "inventory") pod "fcea0fcf-0c80-4334-9327-f0a57b385cc9" (UID: "fcea0fcf-0c80-4334-9327-f0a57b385cc9"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.172376 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fcea0fcf-0c80-4334-9327-f0a57b385cc9" (UID: "fcea0fcf-0c80-4334-9327-f0a57b385cc9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.240130 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.240165 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbnnk\" (UniqueName: \"kubernetes.io/projected/fcea0fcf-0c80-4334-9327-f0a57b385cc9-kube-api-access-cbnnk\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.240178 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcea0fcf-0c80-4334-9327-f0a57b385cc9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.647152 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" event={"ID":"fcea0fcf-0c80-4334-9327-f0a57b385cc9","Type":"ContainerDied","Data":"eb9cf19f3dcfcec0226f2b1b4e3eeb146f04cb508c317c36f5c63ab6d203f2d3"} Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.647196 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb9cf19f3dcfcec0226f2b1b4e3eeb146f04cb508c317c36f5c63ab6d203f2d3" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 
11:36:39.647260 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zdf2z" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.706054 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj"] Feb 26 11:36:39 crc kubenswrapper[4699]: E0226 11:36:39.706580 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095e0632-b9cc-4410-af45-249da70797aa" containerName="extract-utilities" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.706603 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="095e0632-b9cc-4410-af45-249da70797aa" containerName="extract-utilities" Feb 26 11:36:39 crc kubenswrapper[4699]: E0226 11:36:39.706617 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcea0fcf-0c80-4334-9327-f0a57b385cc9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.706628 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcea0fcf-0c80-4334-9327-f0a57b385cc9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 11:36:39 crc kubenswrapper[4699]: E0226 11:36:39.706647 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095e0632-b9cc-4410-af45-249da70797aa" containerName="registry-server" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.706654 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="095e0632-b9cc-4410-af45-249da70797aa" containerName="registry-server" Feb 26 11:36:39 crc kubenswrapper[4699]: E0226 11:36:39.706673 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095e0632-b9cc-4410-af45-249da70797aa" containerName="extract-content" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.706681 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="095e0632-b9cc-4410-af45-249da70797aa" containerName="extract-content" Feb 26 
11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.706935 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcea0fcf-0c80-4334-9327-f0a57b385cc9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.706963 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="095e0632-b9cc-4410-af45-249da70797aa" containerName="registry-server" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.707781 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.710930 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.711864 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.711955 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.712169 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.722815 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj"] Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.753920 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw4lj\" (UniqueName: \"kubernetes.io/projected/fee4a36b-0896-43c1-9b23-3da3ae870cbe-kube-api-access-rw4lj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.753983 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.754094 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.754141 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.856377 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.856444 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.856562 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw4lj\" (UniqueName: \"kubernetes.io/projected/fee4a36b-0896-43c1-9b23-3da3ae870cbe-kube-api-access-rw4lj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.856589 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.860295 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.866718 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.873074 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw4lj\" (UniqueName: \"kubernetes.io/projected/fee4a36b-0896-43c1-9b23-3da3ae870cbe-kube-api-access-rw4lj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:39 crc kubenswrapper[4699]: I0226 11:36:39.873398 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:40 crc kubenswrapper[4699]: I0226 11:36:40.028249 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" Feb 26 11:36:40 crc kubenswrapper[4699]: I0226 11:36:40.528877 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj"] Feb 26 11:36:40 crc kubenswrapper[4699]: I0226 11:36:40.657233 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" event={"ID":"fee4a36b-0896-43c1-9b23-3da3ae870cbe","Type":"ContainerStarted","Data":"80e4ef9ff110025ca1ad9e0b0c1c51b00737c757e4cae0d1f58cc0b932613fd9"} Feb 26 11:36:41 crc kubenswrapper[4699]: I0226 11:36:41.585641 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:36:41 crc kubenswrapper[4699]: I0226 11:36:41.586040 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:36:41 crc kubenswrapper[4699]: I0226 11:36:41.677424 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" event={"ID":"fee4a36b-0896-43c1-9b23-3da3ae870cbe","Type":"ContainerStarted","Data":"9da4d0e1b71f7b3bc90f317d243afef6ee0b2480495e8fa6f0ce050f027878f5"} Feb 26 11:36:41 crc kubenswrapper[4699]: I0226 11:36:41.697929 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" podStartSLOduration=2.2693969210000002 podStartE2EDuration="2.69790372s" 
podCreationTimestamp="2026-02-26 11:36:39 +0000 UTC" firstStartedPulling="2026-02-26 11:36:40.530693931 +0000 UTC m=+1546.341520365" lastFinishedPulling="2026-02-26 11:36:40.95920073 +0000 UTC m=+1546.770027164" observedRunningTime="2026-02-26 11:36:41.694334846 +0000 UTC m=+1547.505161280" watchObservedRunningTime="2026-02-26 11:36:41.69790372 +0000 UTC m=+1547.508730174" Feb 26 11:37:11 crc kubenswrapper[4699]: I0226 11:37:11.584929 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:37:11 crc kubenswrapper[4699]: I0226 11:37:11.585519 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:37:28 crc kubenswrapper[4699]: I0226 11:37:28.297665 4699 scope.go:117] "RemoveContainer" containerID="dad7fa90e67d3f965c26f7c4abb45503a74b01c5861c388e8b2b6571901121e5" Feb 26 11:37:28 crc kubenswrapper[4699]: I0226 11:37:28.334730 4699 scope.go:117] "RemoveContainer" containerID="c6b236ca3c3f327dbd547c137704ae3085c07d33a8a0f68103faaa60a3289bc1" Feb 26 11:37:28 crc kubenswrapper[4699]: I0226 11:37:28.391180 4699 scope.go:117] "RemoveContainer" containerID="5822866374c533954891aab83b4e82e6518ecfafe343985ba49ddc3abdfd00dc" Feb 26 11:37:28 crc kubenswrapper[4699]: I0226 11:37:28.489384 4699 scope.go:117] "RemoveContainer" containerID="3fc8431c0d9189816a6d87bbbf1bde79cfcb29458f69200822c417c75941073b" Feb 26 11:37:41 crc kubenswrapper[4699]: I0226 11:37:41.715521 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:37:41 crc kubenswrapper[4699]: I0226 11:37:41.716167 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:37:41 crc kubenswrapper[4699]: I0226 11:37:41.716221 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:37:41 crc kubenswrapper[4699]: I0226 11:37:41.716825 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:37:41 crc kubenswrapper[4699]: I0226 11:37:41.716882 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" gracePeriod=600 Feb 26 11:37:41 crc kubenswrapper[4699]: E0226 11:37:41.848243 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:37:42 crc kubenswrapper[4699]: I0226 11:37:42.820637 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" exitCode=0 Feb 26 11:37:42 crc kubenswrapper[4699]: I0226 11:37:42.820694 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"} Feb 26 11:37:42 crc kubenswrapper[4699]: I0226 11:37:42.820745 4699 scope.go:117] "RemoveContainer" containerID="e281597aa593fa5c9ddd67a617de4ed4d3363a8c5b9ebcaaf78cd70cd013eef6" Feb 26 11:37:42 crc kubenswrapper[4699]: I0226 11:37:42.821497 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:37:42 crc kubenswrapper[4699]: E0226 11:37:42.821916 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:37:57 crc kubenswrapper[4699]: I0226 11:37:57.262501 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:37:57 crc kubenswrapper[4699]: E0226 11:37:57.263196 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:38:00 crc kubenswrapper[4699]: I0226 11:38:00.165395 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535098-km5z4"] Feb 26 11:38:00 crc kubenswrapper[4699]: I0226 11:38:00.167964 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535098-km5z4" Feb 26 11:38:00 crc kubenswrapper[4699]: I0226 11:38:00.172696 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:38:00 crc kubenswrapper[4699]: I0226 11:38:00.172752 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:38:00 crc kubenswrapper[4699]: I0226 11:38:00.172864 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:38:00 crc kubenswrapper[4699]: I0226 11:38:00.179973 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535098-km5z4"] Feb 26 11:38:00 crc kubenswrapper[4699]: I0226 11:38:00.306903 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28hls\" (UniqueName: \"kubernetes.io/projected/54818b28-fa0f-4021-9dc0-57f3186f3e64-kube-api-access-28hls\") pod \"auto-csr-approver-29535098-km5z4\" (UID: \"54818b28-fa0f-4021-9dc0-57f3186f3e64\") " pod="openshift-infra/auto-csr-approver-29535098-km5z4" Feb 26 11:38:00 crc kubenswrapper[4699]: I0226 11:38:00.408499 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-28hls\" (UniqueName: \"kubernetes.io/projected/54818b28-fa0f-4021-9dc0-57f3186f3e64-kube-api-access-28hls\") pod \"auto-csr-approver-29535098-km5z4\" (UID: \"54818b28-fa0f-4021-9dc0-57f3186f3e64\") " pod="openshift-infra/auto-csr-approver-29535098-km5z4" Feb 26 11:38:00 crc kubenswrapper[4699]: I0226 11:38:00.427066 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28hls\" (UniqueName: \"kubernetes.io/projected/54818b28-fa0f-4021-9dc0-57f3186f3e64-kube-api-access-28hls\") pod \"auto-csr-approver-29535098-km5z4\" (UID: \"54818b28-fa0f-4021-9dc0-57f3186f3e64\") " pod="openshift-infra/auto-csr-approver-29535098-km5z4" Feb 26 11:38:00 crc kubenswrapper[4699]: I0226 11:38:00.500841 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535098-km5z4" Feb 26 11:38:01 crc kubenswrapper[4699]: I0226 11:38:01.037083 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535098-km5z4"] Feb 26 11:38:01 crc kubenswrapper[4699]: I0226 11:38:01.147978 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535098-km5z4" event={"ID":"54818b28-fa0f-4021-9dc0-57f3186f3e64","Type":"ContainerStarted","Data":"b6ba6bc53e2c20bd1bc843bf1c53f22c7fb2f19628fb3de77d156539c6b892f1"} Feb 26 11:38:04 crc kubenswrapper[4699]: I0226 11:38:04.212783 4699 generic.go:334] "Generic (PLEG): container finished" podID="54818b28-fa0f-4021-9dc0-57f3186f3e64" containerID="1b1986eede2e3874e8730ee539f7fe36f87c4471b7b1fdf2129756beebd0a599" exitCode=0 Feb 26 11:38:04 crc kubenswrapper[4699]: I0226 11:38:04.212928 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535098-km5z4" event={"ID":"54818b28-fa0f-4021-9dc0-57f3186f3e64","Type":"ContainerDied","Data":"1b1986eede2e3874e8730ee539f7fe36f87c4471b7b1fdf2129756beebd0a599"} Feb 26 11:38:05 crc kubenswrapper[4699]: I0226 
11:38:05.755054 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535098-km5z4" Feb 26 11:38:05 crc kubenswrapper[4699]: I0226 11:38:05.909571 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28hls\" (UniqueName: \"kubernetes.io/projected/54818b28-fa0f-4021-9dc0-57f3186f3e64-kube-api-access-28hls\") pod \"54818b28-fa0f-4021-9dc0-57f3186f3e64\" (UID: \"54818b28-fa0f-4021-9dc0-57f3186f3e64\") " Feb 26 11:38:05 crc kubenswrapper[4699]: I0226 11:38:05.915149 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54818b28-fa0f-4021-9dc0-57f3186f3e64-kube-api-access-28hls" (OuterVolumeSpecName: "kube-api-access-28hls") pod "54818b28-fa0f-4021-9dc0-57f3186f3e64" (UID: "54818b28-fa0f-4021-9dc0-57f3186f3e64"). InnerVolumeSpecName "kube-api-access-28hls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:38:06 crc kubenswrapper[4699]: I0226 11:38:06.079636 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28hls\" (UniqueName: \"kubernetes.io/projected/54818b28-fa0f-4021-9dc0-57f3186f3e64-kube-api-access-28hls\") on node \"crc\" DevicePath \"\"" Feb 26 11:38:06 crc kubenswrapper[4699]: I0226 11:38:06.232709 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535098-km5z4" event={"ID":"54818b28-fa0f-4021-9dc0-57f3186f3e64","Type":"ContainerDied","Data":"b6ba6bc53e2c20bd1bc843bf1c53f22c7fb2f19628fb3de77d156539c6b892f1"} Feb 26 11:38:06 crc kubenswrapper[4699]: I0226 11:38:06.232748 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6ba6bc53e2c20bd1bc843bf1c53f22c7fb2f19628fb3de77d156539c6b892f1" Feb 26 11:38:06 crc kubenswrapper[4699]: I0226 11:38:06.232815 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535098-km5z4" Feb 26 11:38:06 crc kubenswrapper[4699]: I0226 11:38:06.830763 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535092-t7q4h"] Feb 26 11:38:06 crc kubenswrapper[4699]: I0226 11:38:06.839511 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535092-t7q4h"] Feb 26 11:38:08 crc kubenswrapper[4699]: I0226 11:38:08.401590 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="343bb829-035d-4834-a0c4-d9a61c11a2ee" path="/var/lib/kubelet/pods/343bb829-035d-4834-a0c4-d9a61c11a2ee/volumes" Feb 26 11:38:10 crc kubenswrapper[4699]: I0226 11:38:10.261491 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:38:10 crc kubenswrapper[4699]: E0226 11:38:10.261794 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:38:25 crc kubenswrapper[4699]: I0226 11:38:25.261233 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:38:25 crc kubenswrapper[4699]: E0226 11:38:25.262047 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:38:28 crc kubenswrapper[4699]: I0226 11:38:28.712621 4699 scope.go:117] "RemoveContainer" containerID="f6df4021899217dba2f01191869995ca628d93d69016b474ac26db11ce7351f9" Feb 26 11:38:28 crc kubenswrapper[4699]: I0226 11:38:28.736980 4699 scope.go:117] "RemoveContainer" containerID="3c60b289616323cd6352bf0b5554d4a5d5ee327ffbb6b71e27e82bb85958f651" Feb 26 11:38:28 crc kubenswrapper[4699]: I0226 11:38:28.758754 4699 scope.go:117] "RemoveContainer" containerID="f2cdecc6eba8599d08f98abb877e3708c955cb03d406931c6fd1ea5f2ab28e98" Feb 26 11:38:39 crc kubenswrapper[4699]: I0226 11:38:39.260729 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:38:39 crc kubenswrapper[4699]: E0226 11:38:39.261580 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:38:52 crc kubenswrapper[4699]: I0226 11:38:52.260978 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:38:52 crc kubenswrapper[4699]: E0226 11:38:52.261897 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.302000 4699 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wngln"] Feb 26 11:38:55 crc kubenswrapper[4699]: E0226 11:38:55.302752 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54818b28-fa0f-4021-9dc0-57f3186f3e64" containerName="oc" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.302776 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="54818b28-fa0f-4021-9dc0-57f3186f3e64" containerName="oc" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.302991 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="54818b28-fa0f-4021-9dc0-57f3186f3e64" containerName="oc" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.304698 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wngln" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.319720 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wngln"] Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.400139 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-utilities\") pod \"community-operators-wngln\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " pod="openshift-marketplace/community-operators-wngln" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.400296 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn72s\" (UniqueName: \"kubernetes.io/projected/00de79e0-b495-44ac-ac69-461dae5cfcea-kube-api-access-sn72s\") pod \"community-operators-wngln\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " pod="openshift-marketplace/community-operators-wngln" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.400368 4699 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-catalog-content\") pod \"community-operators-wngln\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " pod="openshift-marketplace/community-operators-wngln" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.501679 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-catalog-content\") pod \"community-operators-wngln\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " pod="openshift-marketplace/community-operators-wngln" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.501813 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-utilities\") pod \"community-operators-wngln\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " pod="openshift-marketplace/community-operators-wngln" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.501941 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn72s\" (UniqueName: \"kubernetes.io/projected/00de79e0-b495-44ac-ac69-461dae5cfcea-kube-api-access-sn72s\") pod \"community-operators-wngln\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " pod="openshift-marketplace/community-operators-wngln" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.502419 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-catalog-content\") pod \"community-operators-wngln\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " pod="openshift-marketplace/community-operators-wngln" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.502773 4699 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-utilities\") pod \"community-operators-wngln\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " pod="openshift-marketplace/community-operators-wngln" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.527228 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn72s\" (UniqueName: \"kubernetes.io/projected/00de79e0-b495-44ac-ac69-461dae5cfcea-kube-api-access-sn72s\") pod \"community-operators-wngln\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " pod="openshift-marketplace/community-operators-wngln" Feb 26 11:38:55 crc kubenswrapper[4699]: I0226 11:38:55.628100 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wngln" Feb 26 11:38:56 crc kubenswrapper[4699]: I0226 11:38:56.141841 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wngln"] Feb 26 11:38:56 crc kubenswrapper[4699]: I0226 11:38:56.639057 4699 generic.go:334] "Generic (PLEG): container finished" podID="00de79e0-b495-44ac-ac69-461dae5cfcea" containerID="1140853f450f2ac7dc6b93a4d99c33ede9a8bdc36ddcd35c196e7946c551da81" exitCode=0 Feb 26 11:38:56 crc kubenswrapper[4699]: I0226 11:38:56.639394 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wngln" event={"ID":"00de79e0-b495-44ac-ac69-461dae5cfcea","Type":"ContainerDied","Data":"1140853f450f2ac7dc6b93a4d99c33ede9a8bdc36ddcd35c196e7946c551da81"} Feb 26 11:38:56 crc kubenswrapper[4699]: I0226 11:38:56.639475 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wngln" event={"ID":"00de79e0-b495-44ac-ac69-461dae5cfcea","Type":"ContainerStarted","Data":"ea93986519d68d2ee2fa0d6490c8ca1431cde9b600c5babfa220cd098cbce583"} Feb 26 11:38:58 crc kubenswrapper[4699]: I0226 
11:38:58.659834 4699 generic.go:334] "Generic (PLEG): container finished" podID="00de79e0-b495-44ac-ac69-461dae5cfcea" containerID="55e33d9e4a2f48076bebaa29f774aa42b701e2bb71b502fe33614a88e47091ff" exitCode=0 Feb 26 11:38:58 crc kubenswrapper[4699]: I0226 11:38:58.659939 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wngln" event={"ID":"00de79e0-b495-44ac-ac69-461dae5cfcea","Type":"ContainerDied","Data":"55e33d9e4a2f48076bebaa29f774aa42b701e2bb71b502fe33614a88e47091ff"} Feb 26 11:39:01 crc kubenswrapper[4699]: I0226 11:39:01.694946 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wngln" event={"ID":"00de79e0-b495-44ac-ac69-461dae5cfcea","Type":"ContainerStarted","Data":"cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef"} Feb 26 11:39:01 crc kubenswrapper[4699]: I0226 11:39:01.720740 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wngln" podStartSLOduration=2.540242452 podStartE2EDuration="6.720682741s" podCreationTimestamp="2026-02-26 11:38:55 +0000 UTC" firstStartedPulling="2026-02-26 11:38:56.641870717 +0000 UTC m=+1682.452697151" lastFinishedPulling="2026-02-26 11:39:00.822311006 +0000 UTC m=+1686.633137440" observedRunningTime="2026-02-26 11:39:01.714576287 +0000 UTC m=+1687.525402751" watchObservedRunningTime="2026-02-26 11:39:01.720682741 +0000 UTC m=+1687.531509175" Feb 26 11:39:05 crc kubenswrapper[4699]: I0226 11:39:05.628907 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wngln" Feb 26 11:39:05 crc kubenswrapper[4699]: I0226 11:39:05.629585 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wngln" Feb 26 11:39:05 crc kubenswrapper[4699]: I0226 11:39:05.680315 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-wngln" Feb 26 11:39:07 crc kubenswrapper[4699]: I0226 11:39:07.261438 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:39:07 crc kubenswrapper[4699]: E0226 11:39:07.261823 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:39:15 crc kubenswrapper[4699]: I0226 11:39:15.682579 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wngln" Feb 26 11:39:15 crc kubenswrapper[4699]: I0226 11:39:15.749159 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wngln"] Feb 26 11:39:16 crc kubenswrapper[4699]: I0226 11:39:16.037579 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wngln" podUID="00de79e0-b495-44ac-ac69-461dae5cfcea" containerName="registry-server" containerID="cri-o://cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef" gracePeriod=2 Feb 26 11:39:16 crc kubenswrapper[4699]: I0226 11:39:16.501024 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wngln" Feb 26 11:39:16 crc kubenswrapper[4699]: I0226 11:39:16.671646 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn72s\" (UniqueName: \"kubernetes.io/projected/00de79e0-b495-44ac-ac69-461dae5cfcea-kube-api-access-sn72s\") pod \"00de79e0-b495-44ac-ac69-461dae5cfcea\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " Feb 26 11:39:16 crc kubenswrapper[4699]: I0226 11:39:16.671759 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-utilities\") pod \"00de79e0-b495-44ac-ac69-461dae5cfcea\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " Feb 26 11:39:16 crc kubenswrapper[4699]: I0226 11:39:16.671874 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-catalog-content\") pod \"00de79e0-b495-44ac-ac69-461dae5cfcea\" (UID: \"00de79e0-b495-44ac-ac69-461dae5cfcea\") " Feb 26 11:39:16 crc kubenswrapper[4699]: I0226 11:39:16.673184 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-utilities" (OuterVolumeSpecName: "utilities") pod "00de79e0-b495-44ac-ac69-461dae5cfcea" (UID: "00de79e0-b495-44ac-ac69-461dae5cfcea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:39:16 crc kubenswrapper[4699]: I0226 11:39:16.678888 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00de79e0-b495-44ac-ac69-461dae5cfcea-kube-api-access-sn72s" (OuterVolumeSpecName: "kube-api-access-sn72s") pod "00de79e0-b495-44ac-ac69-461dae5cfcea" (UID: "00de79e0-b495-44ac-ac69-461dae5cfcea"). InnerVolumeSpecName "kube-api-access-sn72s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:39:16 crc kubenswrapper[4699]: I0226 11:39:16.739556 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00de79e0-b495-44ac-ac69-461dae5cfcea" (UID: "00de79e0-b495-44ac-ac69-461dae5cfcea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:39:16 crc kubenswrapper[4699]: I0226 11:39:16.774709 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:39:16 crc kubenswrapper[4699]: I0226 11:39:16.774761 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00de79e0-b495-44ac-ac69-461dae5cfcea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:39:16 crc kubenswrapper[4699]: I0226 11:39:16.774783 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn72s\" (UniqueName: \"kubernetes.io/projected/00de79e0-b495-44ac-ac69-461dae5cfcea-kube-api-access-sn72s\") on node \"crc\" DevicePath \"\"" Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.049179 4699 generic.go:334] "Generic (PLEG): container finished" podID="00de79e0-b495-44ac-ac69-461dae5cfcea" containerID="cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef" exitCode=0 Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.049238 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wngln" event={"ID":"00de79e0-b495-44ac-ac69-461dae5cfcea","Type":"ContainerDied","Data":"cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef"} Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.049274 4699 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-wngln" event={"ID":"00de79e0-b495-44ac-ac69-461dae5cfcea","Type":"ContainerDied","Data":"ea93986519d68d2ee2fa0d6490c8ca1431cde9b600c5babfa220cd098cbce583"} Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.049323 4699 scope.go:117] "RemoveContainer" containerID="cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef" Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.049326 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wngln" Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.075418 4699 scope.go:117] "RemoveContainer" containerID="55e33d9e4a2f48076bebaa29f774aa42b701e2bb71b502fe33614a88e47091ff" Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.102176 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wngln"] Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.114838 4699 scope.go:117] "RemoveContainer" containerID="1140853f450f2ac7dc6b93a4d99c33ede9a8bdc36ddcd35c196e7946c551da81" Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.138365 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wngln"] Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.153179 4699 scope.go:117] "RemoveContainer" containerID="cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef" Feb 26 11:39:17 crc kubenswrapper[4699]: E0226 11:39:17.153932 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef\": container with ID starting with cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef not found: ID does not exist" containerID="cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef" Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 
11:39:17.153964 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef"} err="failed to get container status \"cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef\": rpc error: code = NotFound desc = could not find container \"cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef\": container with ID starting with cda5dec024a6a906207b193d321e8bcde22d54b3fabfbb5a1562fbd3f13a94ef not found: ID does not exist"
Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.153985 4699 scope.go:117] "RemoveContainer" containerID="55e33d9e4a2f48076bebaa29f774aa42b701e2bb71b502fe33614a88e47091ff"
Feb 26 11:39:17 crc kubenswrapper[4699]: E0226 11:39:17.154559 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e33d9e4a2f48076bebaa29f774aa42b701e2bb71b502fe33614a88e47091ff\": container with ID starting with 55e33d9e4a2f48076bebaa29f774aa42b701e2bb71b502fe33614a88e47091ff not found: ID does not exist" containerID="55e33d9e4a2f48076bebaa29f774aa42b701e2bb71b502fe33614a88e47091ff"
Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.154581 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e33d9e4a2f48076bebaa29f774aa42b701e2bb71b502fe33614a88e47091ff"} err="failed to get container status \"55e33d9e4a2f48076bebaa29f774aa42b701e2bb71b502fe33614a88e47091ff\": rpc error: code = NotFound desc = could not find container \"55e33d9e4a2f48076bebaa29f774aa42b701e2bb71b502fe33614a88e47091ff\": container with ID starting with 55e33d9e4a2f48076bebaa29f774aa42b701e2bb71b502fe33614a88e47091ff not found: ID does not exist"
Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.154627 4699 scope.go:117] "RemoveContainer" containerID="1140853f450f2ac7dc6b93a4d99c33ede9a8bdc36ddcd35c196e7946c551da81"
Feb 26 11:39:17 crc kubenswrapper[4699]: E0226 11:39:17.155086 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1140853f450f2ac7dc6b93a4d99c33ede9a8bdc36ddcd35c196e7946c551da81\": container with ID starting with 1140853f450f2ac7dc6b93a4d99c33ede9a8bdc36ddcd35c196e7946c551da81 not found: ID does not exist" containerID="1140853f450f2ac7dc6b93a4d99c33ede9a8bdc36ddcd35c196e7946c551da81"
Feb 26 11:39:17 crc kubenswrapper[4699]: I0226 11:39:17.155130 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1140853f450f2ac7dc6b93a4d99c33ede9a8bdc36ddcd35c196e7946c551da81"} err="failed to get container status \"1140853f450f2ac7dc6b93a4d99c33ede9a8bdc36ddcd35c196e7946c551da81\": rpc error: code = NotFound desc = could not find container \"1140853f450f2ac7dc6b93a4d99c33ede9a8bdc36ddcd35c196e7946c551da81\": container with ID starting with 1140853f450f2ac7dc6b93a4d99c33ede9a8bdc36ddcd35c196e7946c551da81 not found: ID does not exist"
Feb 26 11:39:18 crc kubenswrapper[4699]: I0226 11:39:18.271842 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00de79e0-b495-44ac-ac69-461dae5cfcea" path="/var/lib/kubelet/pods/00de79e0-b495-44ac-ac69-461dae5cfcea/volumes"
Feb 26 11:39:19 crc kubenswrapper[4699]: I0226 11:39:19.261185 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"
Feb 26 11:39:19 crc kubenswrapper[4699]: E0226 11:39:19.261536 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 11:39:30 crc kubenswrapper[4699]: I0226 11:39:30.260935 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"
Feb 26 11:39:30 crc kubenswrapper[4699]: E0226 11:39:30.261782 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 11:39:45 crc kubenswrapper[4699]: I0226 11:39:45.261791 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"
Feb 26 11:39:45 crc kubenswrapper[4699]: E0226 11:39:45.262649 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 11:39:55 crc kubenswrapper[4699]: I0226 11:39:55.163965 4699 generic.go:334] "Generic (PLEG): container finished" podID="fee4a36b-0896-43c1-9b23-3da3ae870cbe" containerID="9da4d0e1b71f7b3bc90f317d243afef6ee0b2480495e8fa6f0ce050f027878f5" exitCode=0
Feb 26 11:39:55 crc kubenswrapper[4699]: I0226 11:39:55.164045 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" event={"ID":"fee4a36b-0896-43c1-9b23-3da3ae870cbe","Type":"ContainerDied","Data":"9da4d0e1b71f7b3bc90f317d243afef6ee0b2480495e8fa6f0ce050f027878f5"}
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.269487 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"
Feb 26 11:39:56 crc kubenswrapper[4699]: E0226 11:39:56.269725 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.595782 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj"
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.768456 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-bootstrap-combined-ca-bundle\") pod \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") "
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.768509 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw4lj\" (UniqueName: \"kubernetes.io/projected/fee4a36b-0896-43c1-9b23-3da3ae870cbe-kube-api-access-rw4lj\") pod \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") "
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.768642 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-inventory\") pod \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") "
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.768673 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-ssh-key-openstack-edpm-ipam\") pod \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\" (UID: \"fee4a36b-0896-43c1-9b23-3da3ae870cbe\") "
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.774577 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fee4a36b-0896-43c1-9b23-3da3ae870cbe" (UID: "fee4a36b-0896-43c1-9b23-3da3ae870cbe"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.774822 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee4a36b-0896-43c1-9b23-3da3ae870cbe-kube-api-access-rw4lj" (OuterVolumeSpecName: "kube-api-access-rw4lj") pod "fee4a36b-0896-43c1-9b23-3da3ae870cbe" (UID: "fee4a36b-0896-43c1-9b23-3da3ae870cbe"). InnerVolumeSpecName "kube-api-access-rw4lj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.800225 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-inventory" (OuterVolumeSpecName: "inventory") pod "fee4a36b-0896-43c1-9b23-3da3ae870cbe" (UID: "fee4a36b-0896-43c1-9b23-3da3ae870cbe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.806224 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fee4a36b-0896-43c1-9b23-3da3ae870cbe" (UID: "fee4a36b-0896-43c1-9b23-3da3ae870cbe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.870574 4699 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.870611 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw4lj\" (UniqueName: \"kubernetes.io/projected/fee4a36b-0896-43c1-9b23-3da3ae870cbe-kube-api-access-rw4lj\") on node \"crc\" DevicePath \"\""
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.870621 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-inventory\") on node \"crc\" DevicePath \"\""
Feb 26 11:39:56 crc kubenswrapper[4699]: I0226 11:39:56.870629 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee4a36b-0896-43c1-9b23-3da3ae870cbe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.185445 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj" event={"ID":"fee4a36b-0896-43c1-9b23-3da3ae870cbe","Type":"ContainerDied","Data":"80e4ef9ff110025ca1ad9e0b0c1c51b00737c757e4cae0d1f58cc0b932613fd9"}
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.185488 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.185489 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80e4ef9ff110025ca1ad9e0b0c1c51b00737c757e4cae0d1f58cc0b932613fd9"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.277781 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"]
Feb 26 11:39:57 crc kubenswrapper[4699]: E0226 11:39:57.278523 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00de79e0-b495-44ac-ac69-461dae5cfcea" containerName="registry-server"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.278545 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="00de79e0-b495-44ac-ac69-461dae5cfcea" containerName="registry-server"
Feb 26 11:39:57 crc kubenswrapper[4699]: E0226 11:39:57.278576 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee4a36b-0896-43c1-9b23-3da3ae870cbe" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.278585 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee4a36b-0896-43c1-9b23-3da3ae870cbe" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 26 11:39:57 crc kubenswrapper[4699]: E0226 11:39:57.278608 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00de79e0-b495-44ac-ac69-461dae5cfcea" containerName="extract-utilities"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.278616 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="00de79e0-b495-44ac-ac69-461dae5cfcea" containerName="extract-utilities"
Feb 26 11:39:57 crc kubenswrapper[4699]: E0226 11:39:57.278628 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00de79e0-b495-44ac-ac69-461dae5cfcea" containerName="extract-content"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.278635 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="00de79e0-b495-44ac-ac69-461dae5cfcea" containerName="extract-content"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.278892 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="00de79e0-b495-44ac-ac69-461dae5cfcea" containerName="registry-server"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.278922 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee4a36b-0896-43c1-9b23-3da3ae870cbe" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.279717 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.281958 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.282392 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.284061 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.284101 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.295716 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"]
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.379553 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f97wz\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.379964 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f97wz\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.380077 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zj6f\" (UniqueName: \"kubernetes.io/projected/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-kube-api-access-4zj6f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f97wz\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.482718 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f97wz\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.482839 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f97wz\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.482940 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zj6f\" (UniqueName: \"kubernetes.io/projected/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-kube-api-access-4zj6f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f97wz\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.488801 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f97wz\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.488801 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f97wz\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.505755 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zj6f\" (UniqueName: \"kubernetes.io/projected/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-kube-api-access-4zj6f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f97wz\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"
Feb 26 11:39:57 crc kubenswrapper[4699]: I0226 11:39:57.601308 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"
Feb 26 11:39:58 crc kubenswrapper[4699]: I0226 11:39:58.098196 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz"]
Feb 26 11:39:58 crc kubenswrapper[4699]: I0226 11:39:58.194398 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz" event={"ID":"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2","Type":"ContainerStarted","Data":"f077cdeff29985bc87c067f11fd69e3bb120e90af57ac246059fdb95f6bcb184"}
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.133109 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535100-2fxw5"]
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.135306 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535100-2fxw5"
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.139861 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.140308 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d"
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.140797 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.150240 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjxz8\" (UniqueName: \"kubernetes.io/projected/db34348f-7e21-4666-8e45-c48a1fdbe2a4-kube-api-access-pjxz8\") pod \"auto-csr-approver-29535100-2fxw5\" (UID: \"db34348f-7e21-4666-8e45-c48a1fdbe2a4\") " pod="openshift-infra/auto-csr-approver-29535100-2fxw5"
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.154556 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535100-2fxw5"]
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.212075 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz" event={"ID":"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2","Type":"ContainerStarted","Data":"6b432756b4c02ac4dd161ed536fa1431f018acfe6fea2e615d58626a9b11073c"}
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.250948 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz" podStartSLOduration=2.015014334 podStartE2EDuration="3.250928939s" podCreationTimestamp="2026-02-26 11:39:57 +0000 UTC" firstStartedPulling="2026-02-26 11:39:58.102985337 +0000 UTC m=+1743.913811771" lastFinishedPulling="2026-02-26 11:39:59.338899942 +0000 UTC m=+1745.149726376" observedRunningTime="2026-02-26 11:40:00.247571743 +0000 UTC m=+1746.058398187" watchObservedRunningTime="2026-02-26 11:40:00.250928939 +0000 UTC m=+1746.061755373"
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.252599 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjxz8\" (UniqueName: \"kubernetes.io/projected/db34348f-7e21-4666-8e45-c48a1fdbe2a4-kube-api-access-pjxz8\") pod \"auto-csr-approver-29535100-2fxw5\" (UID: \"db34348f-7e21-4666-8e45-c48a1fdbe2a4\") " pod="openshift-infra/auto-csr-approver-29535100-2fxw5"
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.275681 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjxz8\" (UniqueName: \"kubernetes.io/projected/db34348f-7e21-4666-8e45-c48a1fdbe2a4-kube-api-access-pjxz8\") pod \"auto-csr-approver-29535100-2fxw5\" (UID: \"db34348f-7e21-4666-8e45-c48a1fdbe2a4\") " pod="openshift-infra/auto-csr-approver-29535100-2fxw5"
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.463645 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535100-2fxw5"
Feb 26 11:40:00 crc kubenswrapper[4699]: I0226 11:40:00.947247 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535100-2fxw5"]
Feb 26 11:40:00 crc kubenswrapper[4699]: W0226 11:40:00.951424 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb34348f_7e21_4666_8e45_c48a1fdbe2a4.slice/crio-46becb21ca1b56f63c2a251fd67139c1dd9217c8a3b123d6942ff01d48839696 WatchSource:0}: Error finding container 46becb21ca1b56f63c2a251fd67139c1dd9217c8a3b123d6942ff01d48839696: Status 404 returned error can't find the container with id 46becb21ca1b56f63c2a251fd67139c1dd9217c8a3b123d6942ff01d48839696
Feb 26 11:40:01 crc kubenswrapper[4699]: I0226 11:40:01.227370 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535100-2fxw5" event={"ID":"db34348f-7e21-4666-8e45-c48a1fdbe2a4","Type":"ContainerStarted","Data":"46becb21ca1b56f63c2a251fd67139c1dd9217c8a3b123d6942ff01d48839696"}
Feb 26 11:40:03 crc kubenswrapper[4699]: I0226 11:40:03.245575 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535100-2fxw5" event={"ID":"db34348f-7e21-4666-8e45-c48a1fdbe2a4","Type":"ContainerStarted","Data":"4505b88d80198e91d210a89e948ba5fb9b137a6a7006ae878e49e6ab4a45d98a"}
Feb 26 11:40:03 crc kubenswrapper[4699]: I0226 11:40:03.262028 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535100-2fxw5" podStartSLOduration=1.457652541 podStartE2EDuration="3.262006001s" podCreationTimestamp="2026-02-26 11:40:00 +0000 UTC" firstStartedPulling="2026-02-26 11:40:00.954604655 +0000 UTC m=+1746.765431099" lastFinishedPulling="2026-02-26 11:40:02.758958135 +0000 UTC m=+1748.569784559" observedRunningTime="2026-02-26 11:40:03.260159118 +0000 UTC m=+1749.070985572" watchObservedRunningTime="2026-02-26 11:40:03.262006001 +0000 UTC m=+1749.072832445"
Feb 26 11:40:04 crc kubenswrapper[4699]: I0226 11:40:04.262690 4699 generic.go:334] "Generic (PLEG): container finished" podID="db34348f-7e21-4666-8e45-c48a1fdbe2a4" containerID="4505b88d80198e91d210a89e948ba5fb9b137a6a7006ae878e49e6ab4a45d98a" exitCode=0
Feb 26 11:40:04 crc kubenswrapper[4699]: I0226 11:40:04.278836 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535100-2fxw5" event={"ID":"db34348f-7e21-4666-8e45-c48a1fdbe2a4","Type":"ContainerDied","Data":"4505b88d80198e91d210a89e948ba5fb9b137a6a7006ae878e49e6ab4a45d98a"}
Feb 26 11:40:05 crc kubenswrapper[4699]: I0226 11:40:05.559362 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535100-2fxw5"
Feb 26 11:40:05 crc kubenswrapper[4699]: I0226 11:40:05.754762 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjxz8\" (UniqueName: \"kubernetes.io/projected/db34348f-7e21-4666-8e45-c48a1fdbe2a4-kube-api-access-pjxz8\") pod \"db34348f-7e21-4666-8e45-c48a1fdbe2a4\" (UID: \"db34348f-7e21-4666-8e45-c48a1fdbe2a4\") "
Feb 26 11:40:05 crc kubenswrapper[4699]: I0226 11:40:05.761485 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db34348f-7e21-4666-8e45-c48a1fdbe2a4-kube-api-access-pjxz8" (OuterVolumeSpecName: "kube-api-access-pjxz8") pod "db34348f-7e21-4666-8e45-c48a1fdbe2a4" (UID: "db34348f-7e21-4666-8e45-c48a1fdbe2a4"). InnerVolumeSpecName "kube-api-access-pjxz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:40:05 crc kubenswrapper[4699]: I0226 11:40:05.857080 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjxz8\" (UniqueName: \"kubernetes.io/projected/db34348f-7e21-4666-8e45-c48a1fdbe2a4-kube-api-access-pjxz8\") on node \"crc\" DevicePath \"\""
Feb 26 11:40:06 crc kubenswrapper[4699]: I0226 11:40:06.281365 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535100-2fxw5" event={"ID":"db34348f-7e21-4666-8e45-c48a1fdbe2a4","Type":"ContainerDied","Data":"46becb21ca1b56f63c2a251fd67139c1dd9217c8a3b123d6942ff01d48839696"}
Feb 26 11:40:06 crc kubenswrapper[4699]: I0226 11:40:06.281417 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46becb21ca1b56f63c2a251fd67139c1dd9217c8a3b123d6942ff01d48839696"
Feb 26 11:40:06 crc kubenswrapper[4699]: I0226 11:40:06.281438 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535100-2fxw5"
Feb 26 11:40:06 crc kubenswrapper[4699]: I0226 11:40:06.340835 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535094-ccf5t"]
Feb 26 11:40:06 crc kubenswrapper[4699]: I0226 11:40:06.350501 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535094-ccf5t"]
Feb 26 11:40:07 crc kubenswrapper[4699]: I0226 11:40:07.263164 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"
Feb 26 11:40:07 crc kubenswrapper[4699]: E0226 11:40:07.263600 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 11:40:08 crc kubenswrapper[4699]: I0226 11:40:08.271194 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a63cbb99-64c1-46fe-99eb-0d06cc310cba" path="/var/lib/kubelet/pods/a63cbb99-64c1-46fe-99eb-0d06cc310cba/volumes"
Feb 26 11:40:19 crc kubenswrapper[4699]: I0226 11:40:19.262278 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"
Feb 26 11:40:19 crc kubenswrapper[4699]: E0226 11:40:19.263157 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 11:40:28 crc kubenswrapper[4699]: I0226 11:40:28.922811 4699 scope.go:117] "RemoveContainer" containerID="2fbcb8eac2ddc22c3ecc04313ce75c8a329d85e31714a8bfe7dae5bd6310f0ad"
Feb 26 11:40:28 crc kubenswrapper[4699]: I0226 11:40:28.988556 4699 scope.go:117] "RemoveContainer" containerID="50c7ddb03e58cd9791ab6f41d1755213bce0ea0826aec0f5b6934548dfaf9782"
Feb 26 11:40:29 crc kubenswrapper[4699]: I0226 11:40:29.023826 4699 scope.go:117] "RemoveContainer" containerID="9bee82430e4d84a9497e3680da14bb7fec649ba1905937229370f30514994319"
Feb 26 11:40:34 crc kubenswrapper[4699]: I0226 11:40:34.260839 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"
Feb 26 11:40:34 crc kubenswrapper[4699]: E0226 11:40:34.261683 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 11:40:40 crc kubenswrapper[4699]: I0226 11:40:40.037424 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-29gg4"]
Feb 26 11:40:40 crc kubenswrapper[4699]: I0226 11:40:40.047805 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8e68-account-create-update-bwkx8"]
Feb 26 11:40:40 crc kubenswrapper[4699]: I0226 11:40:40.058244 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-29gg4"]
Feb 26 11:40:40 crc kubenswrapper[4699]: I0226 11:40:40.069221 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8e68-account-create-update-bwkx8"]
Feb 26 11:40:40 crc kubenswrapper[4699]: I0226 11:40:40.273493 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9392947-cd31-4afd-92c7-73bac0d4cbd3" path="/var/lib/kubelet/pods/e9392947-cd31-4afd-92c7-73bac0d4cbd3/volumes"
Feb 26 11:40:40 crc kubenswrapper[4699]: I0226 11:40:40.274328 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6e3aace-8f02-410d-8e7e-4fa61336435b" path="/var/lib/kubelet/pods/f6e3aace-8f02-410d-8e7e-4fa61336435b/volumes"
Feb 26 11:40:47 crc kubenswrapper[4699]: I0226 11:40:47.260513 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"
Feb 26 11:40:47 crc kubenswrapper[4699]: E0226 11:40:47.261271 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 11:40:49 crc kubenswrapper[4699]: I0226 11:40:49.039982 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-nhpn8"]
Feb 26 11:40:49 crc kubenswrapper[4699]: I0226 11:40:49.051319 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f9e8-account-create-update-zqq4d"]
Feb 26 11:40:49 crc kubenswrapper[4699]: I0226 11:40:49.062496 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f9e8-account-create-update-zqq4d"]
Feb 26 11:40:49 crc kubenswrapper[4699]: I0226 11:40:49.073519 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-nhpn8"]
Feb 26 11:40:50 crc kubenswrapper[4699]: I0226 11:40:50.030351 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0fa1-account-create-update-l7dhx"]
Feb 26 11:40:50 crc kubenswrapper[4699]: I0226 11:40:50.040379 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0fa1-account-create-update-l7dhx"]
Feb 26 11:40:50 crc kubenswrapper[4699]: I0226 11:40:50.048197 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-htqpz"]
Feb 26 11:40:50 crc kubenswrapper[4699]: I0226 11:40:50.055702 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-htqpz"]
Feb 26 11:40:50 crc kubenswrapper[4699]: I0226 11:40:50.271001 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e74821a-c4e5-4812-829d-c6b60b6657b8" path="/var/lib/kubelet/pods/0e74821a-c4e5-4812-829d-c6b60b6657b8/volumes"
Feb 26 11:40:50 crc kubenswrapper[4699]: I0226 11:40:50.271670 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d08e57-ba28-4614-8b11-2bd1bd4f836f" path="/var/lib/kubelet/pods/22d08e57-ba28-4614-8b11-2bd1bd4f836f/volumes"
Feb 26 11:40:50 crc kubenswrapper[4699]: I0226 11:40:50.272243 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64b0134d-d882-4622-86a4-ab8172ee4fb2" path="/var/lib/kubelet/pods/64b0134d-d882-4622-86a4-ab8172ee4fb2/volumes"
Feb 26 11:40:50 crc kubenswrapper[4699]: I0226 11:40:50.272806 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c209748-0c47-4bbb-883b-f4c245b6a156" path="/var/lib/kubelet/pods/9c209748-0c47-4bbb-883b-f4c245b6a156/volumes"
Feb 26 11:40:59 crc kubenswrapper[4699]: I0226 11:40:59.261203 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"
Feb 26 11:40:59 crc kubenswrapper[4699]: E0226 11:40:59.262015 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 11:41:05 crc kubenswrapper[4699]: I0226 11:41:05.042174 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gjgfc"]
Feb 26 11:41:05 crc kubenswrapper[4699]: I0226 11:41:05.054180 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gjgfc"]
Feb 26 11:41:06 crc kubenswrapper[4699]: I0226 11:41:06.276438 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c102f5c-cbaf-429e-b487-8b179f989720" path="/var/lib/kubelet/pods/7c102f5c-cbaf-429e-b487-8b179f989720/volumes"
Feb 26 11:41:10 crc kubenswrapper[4699]: I0226 11:41:10.266188 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99"
Feb 26 11:41:10 crc kubenswrapper[4699]: E0226 11:41:10.269923 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.033244 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5a9e-account-create-update-fzhw8"]
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.049652 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4fx8g"]
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.065283 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5a9e-account-create-update-fzhw8"]
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.075871 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4fx8g"]
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.087635 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bl9wp"]
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.096280 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a7a2-account-create-update-l2mt4"]
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.105031 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f3b2-account-create-update-xhgnq"]
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.112883 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f3b2-account-create-update-xhgnq"]
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.127365 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a7a2-account-create-update-l2mt4"]
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.141433 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bl9wp"]
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.152671 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-v77r5"]
Feb 26 11:41:11 crc kubenswrapper[4699]: I0226 11:41:11.160997 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-v77r5"]
Feb 26 11:41:12 crc kubenswrapper[4699]: I0226 11:41:12.271321 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1029eddb-2336-4ec5-af4a-b8fed82d3d55" path="/var/lib/kubelet/pods/1029eddb-2336-4ec5-af4a-b8fed82d3d55/volumes"
Feb 26 11:41:12 crc kubenswrapper[4699]: I0226 11:41:12.272924 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c910eba-ce23-4fd9-b08a-54b96fe6a2da" path="/var/lib/kubelet/pods/4c910eba-ce23-4fd9-b08a-54b96fe6a2da/volumes"
Feb 26 11:41:12 crc kubenswrapper[4699]: I0226 11:41:12.274147 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9e36d9-5d53-46d8-a91a-22dc9338ab58" path="/var/lib/kubelet/pods/5c9e36d9-5d53-46d8-a91a-22dc9338ab58/volumes"
Feb 26 11:41:12 crc kubenswrapper[4699]: I0226 11:41:12.275228 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="758bbe1c-d826-47f7-aff6-54e9fc4ebe63" path="/var/lib/kubelet/pods/758bbe1c-d826-47f7-aff6-54e9fc4ebe63/volumes"
Feb 26 11:41:12 crc kubenswrapper[4699]: I0226 11:41:12.276930 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a68fa18-1c49-4d3d-bc5f-75763944d818" path="/var/lib/kubelet/pods/7a68fa18-1c49-4d3d-bc5f-75763944d818/volumes"
Feb 26 11:41:12 crc kubenswrapper[4699]: I0226 11:41:12.277900 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c25243e-b6d9-40f5-9c3b-31947cf74cc9" path="/var/lib/kubelet/pods/8c25243e-b6d9-40f5-9c3b-31947cf74cc9/volumes"
Feb 26 11:41:15 crc kubenswrapper[4699]: I0226 11:41:15.033896 4699 kubelet.go:2437]
"SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-v9z8k"] Feb 26 11:41:15 crc kubenswrapper[4699]: I0226 11:41:15.047136 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-nblvp"] Feb 26 11:41:15 crc kubenswrapper[4699]: I0226 11:41:15.059391 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-v9z8k"] Feb 26 11:41:15 crc kubenswrapper[4699]: I0226 11:41:15.067006 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-nblvp"] Feb 26 11:41:16 crc kubenswrapper[4699]: I0226 11:41:16.272829 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c1d656-4f85-483b-b7a2-6132b71ae093" path="/var/lib/kubelet/pods/72c1d656-4f85-483b-b7a2-6132b71ae093/volumes" Feb 26 11:41:16 crc kubenswrapper[4699]: I0226 11:41:16.274632 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f040612-306e-4ce2-b289-ed5be7bbc9e3" path="/var/lib/kubelet/pods/7f040612-306e-4ce2-b289-ed5be7bbc9e3/volumes" Feb 26 11:41:22 crc kubenswrapper[4699]: I0226 11:41:22.262110 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:41:22 crc kubenswrapper[4699]: E0226 11:41:22.263067 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.159282 4699 scope.go:117] "RemoveContainer" containerID="d84c1ad7d451293243927fb877d730897ca18c570d340c3870da5a49cf7b4e49" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.188024 4699 scope.go:117] "RemoveContainer" 
containerID="f9bc95d14d4ca0f4150bed4b727cc55b90093e4c3307ebc23256f5bd6248badb" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.234566 4699 scope.go:117] "RemoveContainer" containerID="5b4e9b46d7abb3978f9445cbfeebb825f9cd664cf115705fdae6f65a2a171de8" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.310993 4699 scope.go:117] "RemoveContainer" containerID="99b2baa30a79cd9b1afa4299366118e58d2c6c18512f6454267d08d3b636f3e6" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.390571 4699 scope.go:117] "RemoveContainer" containerID="8ac6484a77ece8a11d14d59104b361e660535022ac1b3f3359289cdf598c1ea3" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.413325 4699 scope.go:117] "RemoveContainer" containerID="9e6e239d14eb5fdc0f0fee3107f485263c4c1938d985d9c817ca4f3885c7de71" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.455536 4699 scope.go:117] "RemoveContainer" containerID="e9c4f64540efb8ca94268435547206be7e8a21ea869414c0e0fe3fdc2ad23ae0" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.473750 4699 scope.go:117] "RemoveContainer" containerID="0d9733430c4e718e7aff62771d81bae98ffdfc65e518351b1e877ae065bfd725" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.496544 4699 scope.go:117] "RemoveContainer" containerID="7c9888c6347c41b14207598f1324ae87027fe21cf208ac04db043c3350762dde" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.523764 4699 scope.go:117] "RemoveContainer" containerID="02517dfaa484539c60d2ef72e32d7a113f0b9a11e109ec31ac01691b7f015d05" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.544498 4699 scope.go:117] "RemoveContainer" containerID="91516e9d3caed541543b28d1d1f9c624822ee3d8a280a0f3e6e9514175f1fe30" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.569015 4699 scope.go:117] "RemoveContainer" containerID="c5f501a1150c4caded935575b10f8f9230324616853238eace0db08d01347483" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.590247 4699 scope.go:117] "RemoveContainer" 
containerID="6bf24901f54aea8222e7ac0b7dea606ea0a09d83f0dad7544b8e7bc98249b1e8" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.619016 4699 scope.go:117] "RemoveContainer" containerID="6a7d35b314cb71b7aea626b804eac24b58050ec797d6079e6362282e3f1a7a28" Feb 26 11:41:29 crc kubenswrapper[4699]: I0226 11:41:29.645867 4699 scope.go:117] "RemoveContainer" containerID="f56c01ae851446ecb80715a4bf6a848caa81425dc5709a8852bd80e336fdb67f" Feb 26 11:41:33 crc kubenswrapper[4699]: I0226 11:41:33.087135 4699 generic.go:334] "Generic (PLEG): container finished" podID="8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2" containerID="6b432756b4c02ac4dd161ed536fa1431f018acfe6fea2e615d58626a9b11073c" exitCode=0 Feb 26 11:41:33 crc kubenswrapper[4699]: I0226 11:41:33.087178 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz" event={"ID":"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2","Type":"ContainerDied","Data":"6b432756b4c02ac4dd161ed536fa1431f018acfe6fea2e615d58626a9b11073c"} Feb 26 11:41:34 crc kubenswrapper[4699]: I0226 11:41:34.261028 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:41:34 crc kubenswrapper[4699]: E0226 11:41:34.261919 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:41:34 crc kubenswrapper[4699]: I0226 11:41:34.527572 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz" Feb 26 11:41:34 crc kubenswrapper[4699]: I0226 11:41:34.701137 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-inventory\") pod \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " Feb 26 11:41:34 crc kubenswrapper[4699]: I0226 11:41:34.701268 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-ssh-key-openstack-edpm-ipam\") pod \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " Feb 26 11:41:34 crc kubenswrapper[4699]: I0226 11:41:34.701438 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zj6f\" (UniqueName: \"kubernetes.io/projected/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-kube-api-access-4zj6f\") pod \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\" (UID: \"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2\") " Feb 26 11:41:34 crc kubenswrapper[4699]: I0226 11:41:34.707975 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-kube-api-access-4zj6f" (OuterVolumeSpecName: "kube-api-access-4zj6f") pod "8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2" (UID: "8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2"). InnerVolumeSpecName "kube-api-access-4zj6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:41:34 crc kubenswrapper[4699]: I0226 11:41:34.733769 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-inventory" (OuterVolumeSpecName: "inventory") pod "8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2" (UID: "8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:41:34 crc kubenswrapper[4699]: I0226 11:41:34.738000 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2" (UID: "8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:41:34 crc kubenswrapper[4699]: I0226 11:41:34.803825 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zj6f\" (UniqueName: \"kubernetes.io/projected/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-kube-api-access-4zj6f\") on node \"crc\" DevicePath \"\"" Feb 26 11:41:34 crc kubenswrapper[4699]: I0226 11:41:34.804202 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:41:34 crc kubenswrapper[4699]: I0226 11:41:34.804213 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.105804 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz" event={"ID":"8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2","Type":"ContainerDied","Data":"f077cdeff29985bc87c067f11fd69e3bb120e90af57ac246059fdb95f6bcb184"} Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.106043 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f077cdeff29985bc87c067f11fd69e3bb120e90af57ac246059fdb95f6bcb184" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 
11:41:35.105877 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f97wz" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.194979 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7"] Feb 26 11:41:35 crc kubenswrapper[4699]: E0226 11:41:35.195689 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.195715 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 11:41:35 crc kubenswrapper[4699]: E0226 11:41:35.195773 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db34348f-7e21-4666-8e45-c48a1fdbe2a4" containerName="oc" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.195783 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="db34348f-7e21-4666-8e45-c48a1fdbe2a4" containerName="oc" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.196037 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="db34348f-7e21-4666-8e45-c48a1fdbe2a4" containerName="oc" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.196071 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.196779 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.199177 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.199394 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.199575 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.199760 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.211412 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7"] Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.313378 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4wqj\" (UniqueName: \"kubernetes.io/projected/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-kube-api-access-z4wqj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-86gl7\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.313480 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-86gl7\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 
11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.313712 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-86gl7\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.415464 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4wqj\" (UniqueName: \"kubernetes.io/projected/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-kube-api-access-z4wqj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-86gl7\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.416150 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-86gl7\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.416278 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-86gl7\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.420491 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-86gl7\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.427833 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-86gl7\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.435595 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4wqj\" (UniqueName: \"kubernetes.io/projected/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-kube-api-access-z4wqj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-86gl7\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:41:35 crc kubenswrapper[4699]: I0226 11:41:35.512840 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:41:36 crc kubenswrapper[4699]: I0226 11:41:36.064752 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7"] Feb 26 11:41:36 crc kubenswrapper[4699]: I0226 11:41:36.071619 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:41:36 crc kubenswrapper[4699]: I0226 11:41:36.114791 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" event={"ID":"b1a06be0-15ce-4abd-b9e7-7e11e789bd64","Type":"ContainerStarted","Data":"db75ae825dbb40d97a2b9db69df2b648d27c8bcd6afdccffa8c07497a1f62677"} Feb 26 11:41:37 crc kubenswrapper[4699]: I0226 11:41:37.127755 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" event={"ID":"b1a06be0-15ce-4abd-b9e7-7e11e789bd64","Type":"ContainerStarted","Data":"06dd2f994e026e3d5c71102e70c0d33cced4374bc16162d705261194153c852c"} Feb 26 11:41:37 crc kubenswrapper[4699]: I0226 11:41:37.151318 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" podStartSLOduration=1.7427489729999999 podStartE2EDuration="2.151292805s" podCreationTimestamp="2026-02-26 11:41:35 +0000 UTC" firstStartedPulling="2026-02-26 11:41:36.07127231 +0000 UTC m=+1841.882098754" lastFinishedPulling="2026-02-26 11:41:36.479816152 +0000 UTC m=+1842.290642586" observedRunningTime="2026-02-26 11:41:37.141816701 +0000 UTC m=+1842.952643135" watchObservedRunningTime="2026-02-26 11:41:37.151292805 +0000 UTC m=+1842.962119239" Feb 26 11:41:45 crc kubenswrapper[4699]: I0226 11:41:45.261243 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 
26 11:41:45 crc kubenswrapper[4699]: E0226 11:41:45.262038 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:41:46 crc kubenswrapper[4699]: I0226 11:41:46.615494 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-dr78q"] Feb 26 11:41:46 crc kubenswrapper[4699]: I0226 11:41:46.631965 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-dr78q"] Feb 26 11:41:48 crc kubenswrapper[4699]: I0226 11:41:48.270467 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae813248-510e-4b19-bcd8-39cefca6cd37" path="/var/lib/kubelet/pods/ae813248-510e-4b19-bcd8-39cefca6cd37/volumes" Feb 26 11:41:53 crc kubenswrapper[4699]: I0226 11:41:53.027435 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-7g59c"] Feb 26 11:41:53 crc kubenswrapper[4699]: I0226 11:41:53.036400 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-7g59c"] Feb 26 11:41:54 crc kubenswrapper[4699]: I0226 11:41:54.274100 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45d20cb-c561-4b84-b327-9b096865e8bb" path="/var/lib/kubelet/pods/d45d20cb-c561-4b84-b327-9b096865e8bb/volumes" Feb 26 11:41:58 crc kubenswrapper[4699]: I0226 11:41:58.261910 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:41:58 crc kubenswrapper[4699]: E0226 11:41:58.263466 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.045857 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-z6w9z"] Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.058844 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-z6w9z"] Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.070356 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-28v5g"] Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.078311 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-28v5g"] Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.142407 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535102-2zbvr"] Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.144323 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535102-2zbvr" Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.147060 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.147637 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.147785 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.165975 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535102-2zbvr"] Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.552345 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a9d008-5b7e-4866-b92b-efcb60cbfdb0" path="/var/lib/kubelet/pods/47a9d008-5b7e-4866-b92b-efcb60cbfdb0/volumes" Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.553206 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b33c7b6e-a78a-4a10-848c-a65d01deee0b" path="/var/lib/kubelet/pods/b33c7b6e-a78a-4a10-848c-a65d01deee0b/volumes" Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.640586 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d22sr\" (UniqueName: \"kubernetes.io/projected/1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984-kube-api-access-d22sr\") pod \"auto-csr-approver-29535102-2zbvr\" (UID: \"1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984\") " pod="openshift-infra/auto-csr-approver-29535102-2zbvr" Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.743022 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d22sr\" (UniqueName: \"kubernetes.io/projected/1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984-kube-api-access-d22sr\") pod 
\"auto-csr-approver-29535102-2zbvr\" (UID: \"1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984\") " pod="openshift-infra/auto-csr-approver-29535102-2zbvr" Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.761828 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d22sr\" (UniqueName: \"kubernetes.io/projected/1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984-kube-api-access-d22sr\") pod \"auto-csr-approver-29535102-2zbvr\" (UID: \"1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984\") " pod="openshift-infra/auto-csr-approver-29535102-2zbvr" Feb 26 11:42:00 crc kubenswrapper[4699]: I0226 11:42:00.858871 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535102-2zbvr" Feb 26 11:42:01 crc kubenswrapper[4699]: I0226 11:42:01.329694 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535102-2zbvr"] Feb 26 11:42:01 crc kubenswrapper[4699]: I0226 11:42:01.705863 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535102-2zbvr" event={"ID":"1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984","Type":"ContainerStarted","Data":"a18abcf8f2dfd7199fbcf5f7f1c9ab4491141187d87e951eae13077028a31efd"} Feb 26 11:42:03 crc kubenswrapper[4699]: I0226 11:42:03.726240 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535102-2zbvr" event={"ID":"1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984","Type":"ContainerStarted","Data":"dfc62ad99cdddeccaa0a04e48b0be130dad6cc30569fc90d45e5fa7beabda285"} Feb 26 11:42:03 crc kubenswrapper[4699]: I0226 11:42:03.740417 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535102-2zbvr" podStartSLOduration=1.7117044529999998 podStartE2EDuration="3.74039234s" podCreationTimestamp="2026-02-26 11:42:00 +0000 UTC" firstStartedPulling="2026-02-26 11:42:01.332841617 +0000 UTC m=+1867.143668051" lastFinishedPulling="2026-02-26 
11:42:03.361529494 +0000 UTC m=+1869.172355938" observedRunningTime="2026-02-26 11:42:03.739182907 +0000 UTC m=+1869.550009361" watchObservedRunningTime="2026-02-26 11:42:03.74039234 +0000 UTC m=+1869.551218784" Feb 26 11:42:04 crc kubenswrapper[4699]: I0226 11:42:04.741092 4699 generic.go:334] "Generic (PLEG): container finished" podID="1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984" containerID="dfc62ad99cdddeccaa0a04e48b0be130dad6cc30569fc90d45e5fa7beabda285" exitCode=0 Feb 26 11:42:04 crc kubenswrapper[4699]: I0226 11:42:04.741173 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535102-2zbvr" event={"ID":"1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984","Type":"ContainerDied","Data":"dfc62ad99cdddeccaa0a04e48b0be130dad6cc30569fc90d45e5fa7beabda285"} Feb 26 11:42:06 crc kubenswrapper[4699]: I0226 11:42:06.106269 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535102-2zbvr" Feb 26 11:42:06 crc kubenswrapper[4699]: I0226 11:42:06.376007 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d22sr\" (UniqueName: \"kubernetes.io/projected/1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984-kube-api-access-d22sr\") pod \"1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984\" (UID: \"1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984\") " Feb 26 11:42:06 crc kubenswrapper[4699]: I0226 11:42:06.386143 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984-kube-api-access-d22sr" (OuterVolumeSpecName: "kube-api-access-d22sr") pod "1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984" (UID: "1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984"). InnerVolumeSpecName "kube-api-access-d22sr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:42:06 crc kubenswrapper[4699]: I0226 11:42:06.482156 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d22sr\" (UniqueName: \"kubernetes.io/projected/1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984-kube-api-access-d22sr\") on node \"crc\" DevicePath \"\"" Feb 26 11:42:06 crc kubenswrapper[4699]: I0226 11:42:06.827521 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535102-2zbvr" event={"ID":"1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984","Type":"ContainerDied","Data":"a18abcf8f2dfd7199fbcf5f7f1c9ab4491141187d87e951eae13077028a31efd"} Feb 26 11:42:06 crc kubenswrapper[4699]: I0226 11:42:06.827581 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a18abcf8f2dfd7199fbcf5f7f1c9ab4491141187d87e951eae13077028a31efd" Feb 26 11:42:06 crc kubenswrapper[4699]: I0226 11:42:06.827655 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535102-2zbvr" Feb 26 11:42:07 crc kubenswrapper[4699]: I0226 11:42:07.009871 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535096-xr7rk"] Feb 26 11:42:07 crc kubenswrapper[4699]: I0226 11:42:07.022829 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535096-xr7rk"] Feb 26 11:42:08 crc kubenswrapper[4699]: I0226 11:42:08.276437 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b65e61c-3853-4fd6-93c2-9d13c6776589" path="/var/lib/kubelet/pods/6b65e61c-3853-4fd6-93c2-9d13c6776589/volumes" Feb 26 11:42:11 crc kubenswrapper[4699]: I0226 11:42:11.260692 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:42:11 crc kubenswrapper[4699]: E0226 11:42:11.261513 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:42:23 crc kubenswrapper[4699]: I0226 11:42:23.261570 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:42:23 crc kubenswrapper[4699]: E0226 11:42:23.262519 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:42:24 crc kubenswrapper[4699]: I0226 11:42:24.031327 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-f49xd"] Feb 26 11:42:24 crc kubenswrapper[4699]: I0226 11:42:24.038749 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-f49xd"] Feb 26 11:42:24 crc kubenswrapper[4699]: I0226 11:42:24.270614 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8426fd89-9eba-46fa-8611-e98cc7636b41" path="/var/lib/kubelet/pods/8426fd89-9eba-46fa-8611-e98cc7636b41/volumes" Feb 26 11:42:29 crc kubenswrapper[4699]: I0226 11:42:29.969685 4699 scope.go:117] "RemoveContainer" containerID="861736c6decfb2ac1c3010699205e1df4da771409780863184ec8e9136dd76db" Feb 26 11:42:30 crc kubenswrapper[4699]: I0226 11:42:30.013544 4699 scope.go:117] "RemoveContainer" containerID="4266f5dcbf67cb6303072faf9cd69cd6aabcaee0bb9544fa39ab82b24cc3c4e5" Feb 26 11:42:30 crc kubenswrapper[4699]: I0226 
11:42:30.059273 4699 scope.go:117] "RemoveContainer" containerID="45bdc052e6dc259f4ccec396b223ed5d541f623efae769fc3c166913b1ca187a" Feb 26 11:42:30 crc kubenswrapper[4699]: I0226 11:42:30.153043 4699 scope.go:117] "RemoveContainer" containerID="2cec29afd9941e14f3e1571b5331427d3b1faa6723571c88143afc902d980bd2" Feb 26 11:42:30 crc kubenswrapper[4699]: I0226 11:42:30.215656 4699 scope.go:117] "RemoveContainer" containerID="dd9ce01dbb3d28e8559eda1261c169a7dbac7ba191f3aabd0c7a5d33511f3c12" Feb 26 11:42:30 crc kubenswrapper[4699]: I0226 11:42:30.273688 4699 scope.go:117] "RemoveContainer" containerID="0eab0de6a835999edb566f7a018ef04e992296918bfb17f761cbea8ef8c3775a" Feb 26 11:42:38 crc kubenswrapper[4699]: I0226 11:42:38.260944 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:42:38 crc kubenswrapper[4699]: E0226 11:42:38.261699 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:42:52 crc kubenswrapper[4699]: I0226 11:42:52.260493 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:42:53 crc kubenswrapper[4699]: I0226 11:42:53.018453 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"6ef034a72d27c84dbd807adb1a50ce258b1b8022f1d940a8fb612e62f1d33345"} Feb 26 11:42:55 crc kubenswrapper[4699]: I0226 11:42:55.037012 4699 generic.go:334] "Generic (PLEG): container finished" 
podID="b1a06be0-15ce-4abd-b9e7-7e11e789bd64" containerID="06dd2f994e026e3d5c71102e70c0d33cced4374bc16162d705261194153c852c" exitCode=0 Feb 26 11:42:55 crc kubenswrapper[4699]: I0226 11:42:55.037175 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" event={"ID":"b1a06be0-15ce-4abd-b9e7-7e11e789bd64","Type":"ContainerDied","Data":"06dd2f994e026e3d5c71102e70c0d33cced4374bc16162d705261194153c852c"} Feb 26 11:42:56 crc kubenswrapper[4699]: I0226 11:42:56.459094 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:42:56 crc kubenswrapper[4699]: I0226 11:42:56.590705 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4wqj\" (UniqueName: \"kubernetes.io/projected/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-kube-api-access-z4wqj\") pod \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " Feb 26 11:42:56 crc kubenswrapper[4699]: I0226 11:42:56.590862 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-ssh-key-openstack-edpm-ipam\") pod \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " Feb 26 11:42:56 crc kubenswrapper[4699]: I0226 11:42:56.591079 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-inventory\") pod \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\" (UID: \"b1a06be0-15ce-4abd-b9e7-7e11e789bd64\") " Feb 26 11:42:56 crc kubenswrapper[4699]: I0226 11:42:56.596761 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-kube-api-access-z4wqj" (OuterVolumeSpecName: "kube-api-access-z4wqj") pod "b1a06be0-15ce-4abd-b9e7-7e11e789bd64" (UID: "b1a06be0-15ce-4abd-b9e7-7e11e789bd64"). InnerVolumeSpecName "kube-api-access-z4wqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:42:56 crc kubenswrapper[4699]: I0226 11:42:56.619138 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b1a06be0-15ce-4abd-b9e7-7e11e789bd64" (UID: "b1a06be0-15ce-4abd-b9e7-7e11e789bd64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:42:56 crc kubenswrapper[4699]: I0226 11:42:56.632661 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-inventory" (OuterVolumeSpecName: "inventory") pod "b1a06be0-15ce-4abd-b9e7-7e11e789bd64" (UID: "b1a06be0-15ce-4abd-b9e7-7e11e789bd64"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:42:56 crc kubenswrapper[4699]: I0226 11:42:56.693618 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:42:56 crc kubenswrapper[4699]: I0226 11:42:56.693655 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:42:56 crc kubenswrapper[4699]: I0226 11:42:56.693666 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4wqj\" (UniqueName: \"kubernetes.io/projected/b1a06be0-15ce-4abd-b9e7-7e11e789bd64-kube-api-access-z4wqj\") on node \"crc\" DevicePath \"\"" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.056009 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" event={"ID":"b1a06be0-15ce-4abd-b9e7-7e11e789bd64","Type":"ContainerDied","Data":"db75ae825dbb40d97a2b9db69df2b648d27c8bcd6afdccffa8c07497a1f62677"} Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.056339 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db75ae825dbb40d97a2b9db69df2b648d27c8bcd6afdccffa8c07497a1f62677" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.056189 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-86gl7" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.149561 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm"] Feb 26 11:42:57 crc kubenswrapper[4699]: E0226 11:42:57.150022 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a06be0-15ce-4abd-b9e7-7e11e789bd64" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.150042 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a06be0-15ce-4abd-b9e7-7e11e789bd64" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:42:57 crc kubenswrapper[4699]: E0226 11:42:57.150067 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984" containerName="oc" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.150074 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984" containerName="oc" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.150262 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984" containerName="oc" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.150303 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a06be0-15ce-4abd-b9e7-7e11e789bd64" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.153265 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.155737 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.156494 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.156666 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.157752 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.171268 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm"] Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.305415 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs8ws\" (UniqueName: \"kubernetes.io/projected/974c869a-b430-4a83-81d0-ece37d67c0b0-kube-api-access-gs8ws\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9npsm\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.305764 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9npsm\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 
11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.306197 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9npsm\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.408383 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9npsm\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.408481 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs8ws\" (UniqueName: \"kubernetes.io/projected/974c869a-b430-4a83-81d0-ece37d67c0b0-kube-api-access-gs8ws\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9npsm\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.408526 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9npsm\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.420010 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9npsm\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.420239 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9npsm\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.433133 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs8ws\" (UniqueName: \"kubernetes.io/projected/974c869a-b430-4a83-81d0-ece37d67c0b0-kube-api-access-gs8ws\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9npsm\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.473312 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:42:57 crc kubenswrapper[4699]: I0226 11:42:57.964622 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm"] Feb 26 11:42:58 crc kubenswrapper[4699]: I0226 11:42:58.068165 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" event={"ID":"974c869a-b430-4a83-81d0-ece37d67c0b0","Type":"ContainerStarted","Data":"069297dd71fe712a0a36e6e82a7ee33d0dad62eba7903614e0f1c84d725d3c0f"} Feb 26 11:43:00 crc kubenswrapper[4699]: I0226 11:43:00.048990 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-snmfx"] Feb 26 11:43:00 crc kubenswrapper[4699]: I0226 11:43:00.059475 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-62mhs"] Feb 26 11:43:00 crc kubenswrapper[4699]: I0226 11:43:00.068864 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-snmfx"] Feb 26 11:43:00 crc kubenswrapper[4699]: I0226 11:43:00.081135 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3146-account-create-update-xf6c8"] Feb 26 11:43:00 crc kubenswrapper[4699]: I0226 11:43:00.089768 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-62mhs"] Feb 26 11:43:00 crc kubenswrapper[4699]: I0226 11:43:00.092083 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" event={"ID":"974c869a-b430-4a83-81d0-ece37d67c0b0","Type":"ContainerStarted","Data":"047a7bcc737231590d42107b96e6ff16ff3d82797549985bb5d0845e611f758d"} Feb 26 11:43:00 crc kubenswrapper[4699]: I0226 11:43:00.098933 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3146-account-create-update-xf6c8"] Feb 26 11:43:00 
crc kubenswrapper[4699]: I0226 11:43:00.270816 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c5acc31-dbe4-4698-8346-9a0dbc05234b" path="/var/lib/kubelet/pods/6c5acc31-dbe4-4698-8346-9a0dbc05234b/volumes" Feb 26 11:43:00 crc kubenswrapper[4699]: I0226 11:43:00.271447 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e54b257-33a7-43bd-80c5-30915ae82341" path="/var/lib/kubelet/pods/7e54b257-33a7-43bd-80c5-30915ae82341/volumes" Feb 26 11:43:00 crc kubenswrapper[4699]: I0226 11:43:00.272062 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a229eb-75a5-41b1-8342-53a3a1b433a0" path="/var/lib/kubelet/pods/f4a229eb-75a5-41b1-8342-53a3a1b433a0/volumes" Feb 26 11:43:01 crc kubenswrapper[4699]: I0226 11:43:01.019536 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" podStartSLOduration=2.430598463 podStartE2EDuration="4.01949415s" podCreationTimestamp="2026-02-26 11:42:57 +0000 UTC" firstStartedPulling="2026-02-26 11:42:57.965938692 +0000 UTC m=+1923.776765126" lastFinishedPulling="2026-02-26 11:42:59.554834379 +0000 UTC m=+1925.365660813" observedRunningTime="2026-02-26 11:43:00.114687318 +0000 UTC m=+1925.925513762" watchObservedRunningTime="2026-02-26 11:43:01.01949415 +0000 UTC m=+1926.830320584" Feb 26 11:43:01 crc kubenswrapper[4699]: I0226 11:43:01.032219 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-66cf-account-create-update-qvvdk"] Feb 26 11:43:01 crc kubenswrapper[4699]: I0226 11:43:01.044565 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-43f0-account-create-update-vgmlz"] Feb 26 11:43:01 crc kubenswrapper[4699]: I0226 11:43:01.055504 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-66cf-account-create-update-qvvdk"] Feb 26 11:43:01 crc kubenswrapper[4699]: I0226 11:43:01.065396 4699 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-hq69l"] Feb 26 11:43:01 crc kubenswrapper[4699]: I0226 11:43:01.074240 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-43f0-account-create-update-vgmlz"] Feb 26 11:43:01 crc kubenswrapper[4699]: I0226 11:43:01.081929 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-hq69l"] Feb 26 11:43:02 crc kubenswrapper[4699]: I0226 11:43:02.271042 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86425865-434f-43e8-9592-e890078837a2" path="/var/lib/kubelet/pods/86425865-434f-43e8-9592-e890078837a2/volumes" Feb 26 11:43:02 crc kubenswrapper[4699]: I0226 11:43:02.271810 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea40818-89fa-4b78-9833-82635861fee1" path="/var/lib/kubelet/pods/dea40818-89fa-4b78-9833-82635861fee1/volumes" Feb 26 11:43:02 crc kubenswrapper[4699]: I0226 11:43:02.272394 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f99c6b36-a5f6-4f0b-973f-dfa853d2c558" path="/var/lib/kubelet/pods/f99c6b36-a5f6-4f0b-973f-dfa853d2c558/volumes" Feb 26 11:43:05 crc kubenswrapper[4699]: I0226 11:43:05.133909 4699 generic.go:334] "Generic (PLEG): container finished" podID="974c869a-b430-4a83-81d0-ece37d67c0b0" containerID="047a7bcc737231590d42107b96e6ff16ff3d82797549985bb5d0845e611f758d" exitCode=0 Feb 26 11:43:05 crc kubenswrapper[4699]: I0226 11:43:05.133966 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" event={"ID":"974c869a-b430-4a83-81d0-ece37d67c0b0","Type":"ContainerDied","Data":"047a7bcc737231590d42107b96e6ff16ff3d82797549985bb5d0845e611f758d"} Feb 26 11:43:06 crc kubenswrapper[4699]: I0226 11:43:06.537020 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:43:06 crc kubenswrapper[4699]: I0226 11:43:06.689762 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-inventory\") pod \"974c869a-b430-4a83-81d0-ece37d67c0b0\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " Feb 26 11:43:06 crc kubenswrapper[4699]: I0226 11:43:06.689851 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs8ws\" (UniqueName: \"kubernetes.io/projected/974c869a-b430-4a83-81d0-ece37d67c0b0-kube-api-access-gs8ws\") pod \"974c869a-b430-4a83-81d0-ece37d67c0b0\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " Feb 26 11:43:06 crc kubenswrapper[4699]: I0226 11:43:06.689991 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-ssh-key-openstack-edpm-ipam\") pod \"974c869a-b430-4a83-81d0-ece37d67c0b0\" (UID: \"974c869a-b430-4a83-81d0-ece37d67c0b0\") " Feb 26 11:43:06 crc kubenswrapper[4699]: I0226 11:43:06.696775 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/974c869a-b430-4a83-81d0-ece37d67c0b0-kube-api-access-gs8ws" (OuterVolumeSpecName: "kube-api-access-gs8ws") pod "974c869a-b430-4a83-81d0-ece37d67c0b0" (UID: "974c869a-b430-4a83-81d0-ece37d67c0b0"). InnerVolumeSpecName "kube-api-access-gs8ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:43:06 crc kubenswrapper[4699]: I0226 11:43:06.716027 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-inventory" (OuterVolumeSpecName: "inventory") pod "974c869a-b430-4a83-81d0-ece37d67c0b0" (UID: "974c869a-b430-4a83-81d0-ece37d67c0b0"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:43:06 crc kubenswrapper[4699]: I0226 11:43:06.722023 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "974c869a-b430-4a83-81d0-ece37d67c0b0" (UID: "974c869a-b430-4a83-81d0-ece37d67c0b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:43:06 crc kubenswrapper[4699]: I0226 11:43:06.793302 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:43:06 crc kubenswrapper[4699]: I0226 11:43:06.793448 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs8ws\" (UniqueName: \"kubernetes.io/projected/974c869a-b430-4a83-81d0-ece37d67c0b0-kube-api-access-gs8ws\") on node \"crc\" DevicePath \"\"" Feb 26 11:43:06 crc kubenswrapper[4699]: I0226 11:43:06.793536 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/974c869a-b430-4a83-81d0-ece37d67c0b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.153013 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" event={"ID":"974c869a-b430-4a83-81d0-ece37d67c0b0","Type":"ContainerDied","Data":"069297dd71fe712a0a36e6e82a7ee33d0dad62eba7903614e0f1c84d725d3c0f"} Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.153049 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9npsm" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.153059 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="069297dd71fe712a0a36e6e82a7ee33d0dad62eba7903614e0f1c84d725d3c0f" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.258982 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f"] Feb 26 11:43:07 crc kubenswrapper[4699]: E0226 11:43:07.259732 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="974c869a-b430-4a83-81d0-ece37d67c0b0" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.259758 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="974c869a-b430-4a83-81d0-ece37d67c0b0" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.259952 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="974c869a-b430-4a83-81d0-ece37d67c0b0" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.261038 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.266186 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.267453 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.267453 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.267665 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.273865 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f"] Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.405093 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4kk\" (UniqueName: \"kubernetes.io/projected/ac66647f-74c0-4a4e-9925-e47cd90568a1-kube-api-access-rh4kk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlb2f\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.405161 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlb2f\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.405384 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlb2f\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.508049 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh4kk\" (UniqueName: \"kubernetes.io/projected/ac66647f-74c0-4a4e-9925-e47cd90568a1-kube-api-access-rh4kk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlb2f\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.508172 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlb2f\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.508237 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlb2f\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.513018 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlb2f\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.516723 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlb2f\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.525514 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh4kk\" (UniqueName: \"kubernetes.io/projected/ac66647f-74c0-4a4e-9925-e47cd90568a1-kube-api-access-rh4kk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mlb2f\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:07 crc kubenswrapper[4699]: I0226 11:43:07.577175 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:08 crc kubenswrapper[4699]: I0226 11:43:08.140523 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f"] Feb 26 11:43:08 crc kubenswrapper[4699]: W0226 11:43:08.152445 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac66647f_74c0_4a4e_9925_e47cd90568a1.slice/crio-589d89715ee8910d0add53e05a162f3c15e96c44d15bb00039839ce8af8bf08c WatchSource:0}: Error finding container 589d89715ee8910d0add53e05a162f3c15e96c44d15bb00039839ce8af8bf08c: Status 404 returned error can't find the container with id 589d89715ee8910d0add53e05a162f3c15e96c44d15bb00039839ce8af8bf08c Feb 26 11:43:08 crc kubenswrapper[4699]: I0226 11:43:08.163603 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" event={"ID":"ac66647f-74c0-4a4e-9925-e47cd90568a1","Type":"ContainerStarted","Data":"589d89715ee8910d0add53e05a162f3c15e96c44d15bb00039839ce8af8bf08c"} Feb 26 11:43:09 crc kubenswrapper[4699]: I0226 11:43:09.176463 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" event={"ID":"ac66647f-74c0-4a4e-9925-e47cd90568a1","Type":"ContainerStarted","Data":"37c024ee15929d11af3667b4c33bbdf3d64440abcac66b262307ff7f2f9f1b7f"} Feb 26 11:43:09 crc kubenswrapper[4699]: I0226 11:43:09.201744 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" podStartSLOduration=1.72808494 podStartE2EDuration="2.20171633s" podCreationTimestamp="2026-02-26 11:43:07 +0000 UTC" firstStartedPulling="2026-02-26 11:43:08.156381411 +0000 UTC m=+1933.967207845" lastFinishedPulling="2026-02-26 11:43:08.630012801 +0000 UTC m=+1934.440839235" 
observedRunningTime="2026-02-26 11:43:09.196800823 +0000 UTC m=+1935.007627277" watchObservedRunningTime="2026-02-26 11:43:09.20171633 +0000 UTC m=+1935.012542774" Feb 26 11:43:28 crc kubenswrapper[4699]: I0226 11:43:28.039699 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vx5jv"] Feb 26 11:43:28 crc kubenswrapper[4699]: I0226 11:43:28.047696 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vx5jv"] Feb 26 11:43:28 crc kubenswrapper[4699]: I0226 11:43:28.275244 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef20f352-fa9c-4bc8-875d-d537f00f75d5" path="/var/lib/kubelet/pods/ef20f352-fa9c-4bc8-875d-d537f00f75d5/volumes" Feb 26 11:43:30 crc kubenswrapper[4699]: I0226 11:43:30.444461 4699 scope.go:117] "RemoveContainer" containerID="b4034fed15cab382c6c5fd47ff21f822b9c9aa9789392181d8ca9fe59c0d233d" Feb 26 11:43:30 crc kubenswrapper[4699]: I0226 11:43:30.511520 4699 scope.go:117] "RemoveContainer" containerID="e2f8c469ec04f6028bf261997ea76ce892a579e71cd0b1e3cbda4d1a898468a0" Feb 26 11:43:30 crc kubenswrapper[4699]: I0226 11:43:30.537385 4699 scope.go:117] "RemoveContainer" containerID="b2b62d6d79c5c992c3884d7e4c7aa453502b8500701d02db975cc913cb332656" Feb 26 11:43:30 crc kubenswrapper[4699]: I0226 11:43:30.592986 4699 scope.go:117] "RemoveContainer" containerID="9eff27ca91f87caa5ed2a02975a6d6bc2e239264a6a323e5cbc0471084500265" Feb 26 11:43:30 crc kubenswrapper[4699]: I0226 11:43:30.687900 4699 scope.go:117] "RemoveContainer" containerID="853cdd9a99dcd559f8a9a9863c9ecd3351cc72fb23481557abd22c41a3816b2d" Feb 26 11:43:30 crc kubenswrapper[4699]: I0226 11:43:30.721886 4699 scope.go:117] "RemoveContainer" containerID="ea224b941b0465af7d8b7b7d5e0297ed56d62f796e3b6566730ce00cb01d16ec" Feb 26 11:43:30 crc kubenswrapper[4699]: I0226 11:43:30.772672 4699 scope.go:117] "RemoveContainer" 
containerID="0569f07824e60d0703bc892d604ca5230523b1fde72c768bd283ae0d47703780" Feb 26 11:43:44 crc kubenswrapper[4699]: I0226 11:43:44.518778 4699 generic.go:334] "Generic (PLEG): container finished" podID="ac66647f-74c0-4a4e-9925-e47cd90568a1" containerID="37c024ee15929d11af3667b4c33bbdf3d64440abcac66b262307ff7f2f9f1b7f" exitCode=0 Feb 26 11:43:44 crc kubenswrapper[4699]: I0226 11:43:44.518890 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" event={"ID":"ac66647f-74c0-4a4e-9925-e47cd90568a1","Type":"ContainerDied","Data":"37c024ee15929d11af3667b4c33bbdf3d64440abcac66b262307ff7f2f9f1b7f"} Feb 26 11:43:45 crc kubenswrapper[4699]: I0226 11:43:45.938713 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.039019 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mcdml"] Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.046342 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mcdml"] Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.090976 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh4kk\" (UniqueName: \"kubernetes.io/projected/ac66647f-74c0-4a4e-9925-e47cd90568a1-kube-api-access-rh4kk\") pod \"ac66647f-74c0-4a4e-9925-e47cd90568a1\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.091024 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-inventory\") pod \"ac66647f-74c0-4a4e-9925-e47cd90568a1\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.091229 4699 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-ssh-key-openstack-edpm-ipam\") pod \"ac66647f-74c0-4a4e-9925-e47cd90568a1\" (UID: \"ac66647f-74c0-4a4e-9925-e47cd90568a1\") " Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.097611 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac66647f-74c0-4a4e-9925-e47cd90568a1-kube-api-access-rh4kk" (OuterVolumeSpecName: "kube-api-access-rh4kk") pod "ac66647f-74c0-4a4e-9925-e47cd90568a1" (UID: "ac66647f-74c0-4a4e-9925-e47cd90568a1"). InnerVolumeSpecName "kube-api-access-rh4kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.120052 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ac66647f-74c0-4a4e-9925-e47cd90568a1" (UID: "ac66647f-74c0-4a4e-9925-e47cd90568a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.131436 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-inventory" (OuterVolumeSpecName: "inventory") pod "ac66647f-74c0-4a4e-9925-e47cd90568a1" (UID: "ac66647f-74c0-4a4e-9925-e47cd90568a1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.192879 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.192910 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh4kk\" (UniqueName: \"kubernetes.io/projected/ac66647f-74c0-4a4e-9925-e47cd90568a1-kube-api-access-rh4kk\") on node \"crc\" DevicePath \"\"" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.192919 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac66647f-74c0-4a4e-9925-e47cd90568a1-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.270558 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f528c9c1-4318-4d46-9b02-43f955e04009" path="/var/lib/kubelet/pods/f528c9c1-4318-4d46-9b02-43f955e04009/volumes" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.549249 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" event={"ID":"ac66647f-74c0-4a4e-9925-e47cd90568a1","Type":"ContainerDied","Data":"589d89715ee8910d0add53e05a162f3c15e96c44d15bb00039839ce8af8bf08c"} Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.549641 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="589d89715ee8910d0add53e05a162f3c15e96c44d15bb00039839ce8af8bf08c" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.549617 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mlb2f" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.639838 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25"] Feb 26 11:43:46 crc kubenswrapper[4699]: E0226 11:43:46.640809 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac66647f-74c0-4a4e-9925-e47cd90568a1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.640894 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac66647f-74c0-4a4e-9925-e47cd90568a1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.641196 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac66647f-74c0-4a4e-9925-e47cd90568a1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.641891 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.648199 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.648499 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.648220 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.661664 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.667616 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25"] Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.806458 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9q25\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.806749 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9q25\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.807264 
4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv2ll\" (UniqueName: \"kubernetes.io/projected/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-kube-api-access-qv2ll\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9q25\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.909921 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv2ll\" (UniqueName: \"kubernetes.io/projected/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-kube-api-access-qv2ll\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9q25\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.909988 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9q25\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.910099 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9q25\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.915080 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-h9q25\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.915232 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9q25\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.938788 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv2ll\" (UniqueName: \"kubernetes.io/projected/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-kube-api-access-qv2ll\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9q25\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:43:46 crc kubenswrapper[4699]: I0226 11:43:46.970732 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:43:47 crc kubenswrapper[4699]: I0226 11:43:47.491315 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25"] Feb 26 11:43:47 crc kubenswrapper[4699]: I0226 11:43:47.558304 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" event={"ID":"85e0d37e-fb25-4bbc-afe5-7e6ab304390c","Type":"ContainerStarted","Data":"0a2e58c697cadea58aebf86626b65bde4f82fab06f41769728b26cd2783dc764"} Feb 26 11:43:48 crc kubenswrapper[4699]: I0226 11:43:48.567195 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" event={"ID":"85e0d37e-fb25-4bbc-afe5-7e6ab304390c","Type":"ContainerStarted","Data":"ba3628d7d80420e0729959a1fcf9d498ca569fe68feae991747716a7c0d13fa3"} Feb 26 11:43:48 crc kubenswrapper[4699]: I0226 11:43:48.588238 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" podStartSLOduration=2.164664608 podStartE2EDuration="2.588066053s" podCreationTimestamp="2026-02-26 11:43:46 +0000 UTC" firstStartedPulling="2026-02-26 11:43:47.499482235 +0000 UTC m=+1973.310308669" lastFinishedPulling="2026-02-26 11:43:47.92288368 +0000 UTC m=+1973.733710114" observedRunningTime="2026-02-26 11:43:48.57921296 +0000 UTC m=+1974.390039414" watchObservedRunningTime="2026-02-26 11:43:48.588066053 +0000 UTC m=+1974.398892487" Feb 26 11:43:51 crc kubenswrapper[4699]: I0226 11:43:51.037686 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dz84d"] Feb 26 11:43:51 crc kubenswrapper[4699]: I0226 11:43:51.046682 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dz84d"] Feb 26 11:43:52 crc kubenswrapper[4699]: 
I0226 11:43:52.272518 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a" path="/var/lib/kubelet/pods/b5fb37dd-bd18-4ada-97c4-3ff3e3555d8a/volumes" Feb 26 11:44:00 crc kubenswrapper[4699]: I0226 11:44:00.138107 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535104-r58dw"] Feb 26 11:44:00 crc kubenswrapper[4699]: I0226 11:44:00.140865 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535104-r58dw" Feb 26 11:44:00 crc kubenswrapper[4699]: I0226 11:44:00.142608 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:44:00 crc kubenswrapper[4699]: I0226 11:44:00.144183 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:44:00 crc kubenswrapper[4699]: I0226 11:44:00.145089 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:44:00 crc kubenswrapper[4699]: I0226 11:44:00.149070 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535104-r58dw"] Feb 26 11:44:00 crc kubenswrapper[4699]: I0226 11:44:00.297497 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqj9x\" (UniqueName: \"kubernetes.io/projected/3a59d7ac-e643-4693-9c6b-994f1fadd83d-kube-api-access-nqj9x\") pod \"auto-csr-approver-29535104-r58dw\" (UID: \"3a59d7ac-e643-4693-9c6b-994f1fadd83d\") " pod="openshift-infra/auto-csr-approver-29535104-r58dw" Feb 26 11:44:00 crc kubenswrapper[4699]: I0226 11:44:00.399291 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqj9x\" (UniqueName: \"kubernetes.io/projected/3a59d7ac-e643-4693-9c6b-994f1fadd83d-kube-api-access-nqj9x\") pod 
\"auto-csr-approver-29535104-r58dw\" (UID: \"3a59d7ac-e643-4693-9c6b-994f1fadd83d\") " pod="openshift-infra/auto-csr-approver-29535104-r58dw" Feb 26 11:44:00 crc kubenswrapper[4699]: I0226 11:44:00.420101 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqj9x\" (UniqueName: \"kubernetes.io/projected/3a59d7ac-e643-4693-9c6b-994f1fadd83d-kube-api-access-nqj9x\") pod \"auto-csr-approver-29535104-r58dw\" (UID: \"3a59d7ac-e643-4693-9c6b-994f1fadd83d\") " pod="openshift-infra/auto-csr-approver-29535104-r58dw" Feb 26 11:44:00 crc kubenswrapper[4699]: I0226 11:44:00.470909 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535104-r58dw" Feb 26 11:44:00 crc kubenswrapper[4699]: I0226 11:44:00.944392 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535104-r58dw"] Feb 26 11:44:01 crc kubenswrapper[4699]: I0226 11:44:01.719081 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535104-r58dw" event={"ID":"3a59d7ac-e643-4693-9c6b-994f1fadd83d","Type":"ContainerStarted","Data":"81727e4e4d4367b44e1f05a5ee53466b5819100a16f232ecabf7d87d7d5e9e95"} Feb 26 11:44:03 crc kubenswrapper[4699]: I0226 11:44:03.742864 4699 generic.go:334] "Generic (PLEG): container finished" podID="3a59d7ac-e643-4693-9c6b-994f1fadd83d" containerID="6dd92189791b2617628aa3e717314eb02f69fda3f8d5e7e8ceb2bcddb537435f" exitCode=0 Feb 26 11:44:03 crc kubenswrapper[4699]: I0226 11:44:03.742979 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535104-r58dw" event={"ID":"3a59d7ac-e643-4693-9c6b-994f1fadd83d","Type":"ContainerDied","Data":"6dd92189791b2617628aa3e717314eb02f69fda3f8d5e7e8ceb2bcddb537435f"} Feb 26 11:44:05 crc kubenswrapper[4699]: I0226 11:44:05.101172 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535104-r58dw" Feb 26 11:44:05 crc kubenswrapper[4699]: I0226 11:44:05.202100 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqj9x\" (UniqueName: \"kubernetes.io/projected/3a59d7ac-e643-4693-9c6b-994f1fadd83d-kube-api-access-nqj9x\") pod \"3a59d7ac-e643-4693-9c6b-994f1fadd83d\" (UID: \"3a59d7ac-e643-4693-9c6b-994f1fadd83d\") " Feb 26 11:44:05 crc kubenswrapper[4699]: I0226 11:44:05.208568 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a59d7ac-e643-4693-9c6b-994f1fadd83d-kube-api-access-nqj9x" (OuterVolumeSpecName: "kube-api-access-nqj9x") pod "3a59d7ac-e643-4693-9c6b-994f1fadd83d" (UID: "3a59d7ac-e643-4693-9c6b-994f1fadd83d"). InnerVolumeSpecName "kube-api-access-nqj9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:44:05 crc kubenswrapper[4699]: I0226 11:44:05.305088 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqj9x\" (UniqueName: \"kubernetes.io/projected/3a59d7ac-e643-4693-9c6b-994f1fadd83d-kube-api-access-nqj9x\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:05 crc kubenswrapper[4699]: I0226 11:44:05.761567 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535104-r58dw" event={"ID":"3a59d7ac-e643-4693-9c6b-994f1fadd83d","Type":"ContainerDied","Data":"81727e4e4d4367b44e1f05a5ee53466b5819100a16f232ecabf7d87d7d5e9e95"} Feb 26 11:44:05 crc kubenswrapper[4699]: I0226 11:44:05.761932 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81727e4e4d4367b44e1f05a5ee53466b5819100a16f232ecabf7d87d7d5e9e95" Feb 26 11:44:05 crc kubenswrapper[4699]: I0226 11:44:05.761614 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535104-r58dw" Feb 26 11:44:06 crc kubenswrapper[4699]: I0226 11:44:06.178411 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535098-km5z4"] Feb 26 11:44:06 crc kubenswrapper[4699]: I0226 11:44:06.189865 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535098-km5z4"] Feb 26 11:44:06 crc kubenswrapper[4699]: I0226 11:44:06.273496 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54818b28-fa0f-4021-9dc0-57f3186f3e64" path="/var/lib/kubelet/pods/54818b28-fa0f-4021-9dc0-57f3186f3e64/volumes" Feb 26 11:44:30 crc kubenswrapper[4699]: I0226 11:44:30.929224 4699 scope.go:117] "RemoveContainer" containerID="6a0914a3db1c0b6e1b3a5a9cf2e1d8ac0e44a6dc0eb35fc159954e4b3f365a3d" Feb 26 11:44:30 crc kubenswrapper[4699]: I0226 11:44:30.999716 4699 scope.go:117] "RemoveContainer" containerID="2cee4e67f7ca1be08a16734a80281eca2dc16bb5d20a6d285f430706b65292fe" Feb 26 11:44:31 crc kubenswrapper[4699]: I0226 11:44:31.047877 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-77cbz"] Feb 26 11:44:31 crc kubenswrapper[4699]: I0226 11:44:31.054412 4699 scope.go:117] "RemoveContainer" containerID="1b1986eede2e3874e8730ee539f7fe36f87c4471b7b1fdf2129756beebd0a599" Feb 26 11:44:31 crc kubenswrapper[4699]: I0226 11:44:31.058149 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-77cbz"] Feb 26 11:44:32 crc kubenswrapper[4699]: I0226 11:44:32.273367 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462a2449-2712-4bb7-9ec9-6e09a1800361" path="/var/lib/kubelet/pods/462a2449-2712-4bb7-9ec9-6e09a1800361/volumes" Feb 26 11:44:35 crc kubenswrapper[4699]: I0226 11:44:35.037571 4699 generic.go:334] "Generic (PLEG): container finished" podID="85e0d37e-fb25-4bbc-afe5-7e6ab304390c" 
containerID="ba3628d7d80420e0729959a1fcf9d498ca569fe68feae991747716a7c0d13fa3" exitCode=0 Feb 26 11:44:35 crc kubenswrapper[4699]: I0226 11:44:35.037678 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" event={"ID":"85e0d37e-fb25-4bbc-afe5-7e6ab304390c","Type":"ContainerDied","Data":"ba3628d7d80420e0729959a1fcf9d498ca569fe68feae991747716a7c0d13fa3"} Feb 26 11:44:36 crc kubenswrapper[4699]: I0226 11:44:36.519817 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:44:36 crc kubenswrapper[4699]: I0226 11:44:36.650171 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-ssh-key-openstack-edpm-ipam\") pod \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " Feb 26 11:44:36 crc kubenswrapper[4699]: I0226 11:44:36.650225 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv2ll\" (UniqueName: \"kubernetes.io/projected/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-kube-api-access-qv2ll\") pod \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " Feb 26 11:44:36 crc kubenswrapper[4699]: I0226 11:44:36.650273 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-inventory\") pod \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\" (UID: \"85e0d37e-fb25-4bbc-afe5-7e6ab304390c\") " Feb 26 11:44:36 crc kubenswrapper[4699]: I0226 11:44:36.657620 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-kube-api-access-qv2ll" (OuterVolumeSpecName: 
"kube-api-access-qv2ll") pod "85e0d37e-fb25-4bbc-afe5-7e6ab304390c" (UID: "85e0d37e-fb25-4bbc-afe5-7e6ab304390c"). InnerVolumeSpecName "kube-api-access-qv2ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:44:36 crc kubenswrapper[4699]: I0226 11:44:36.680044 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "85e0d37e-fb25-4bbc-afe5-7e6ab304390c" (UID: "85e0d37e-fb25-4bbc-afe5-7e6ab304390c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:44:36 crc kubenswrapper[4699]: I0226 11:44:36.680097 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-inventory" (OuterVolumeSpecName: "inventory") pod "85e0d37e-fb25-4bbc-afe5-7e6ab304390c" (UID: "85e0d37e-fb25-4bbc-afe5-7e6ab304390c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:44:36 crc kubenswrapper[4699]: I0226 11:44:36.752901 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:36 crc kubenswrapper[4699]: I0226 11:44:36.753198 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv2ll\" (UniqueName: \"kubernetes.io/projected/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-kube-api-access-qv2ll\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:36 crc kubenswrapper[4699]: I0226 11:44:36.753325 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e0d37e-fb25-4bbc-afe5-7e6ab304390c-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.058069 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" event={"ID":"85e0d37e-fb25-4bbc-afe5-7e6ab304390c","Type":"ContainerDied","Data":"0a2e58c697cadea58aebf86626b65bde4f82fab06f41769728b26cd2783dc764"} Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.058423 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a2e58c697cadea58aebf86626b65bde4f82fab06f41769728b26cd2783dc764" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.058183 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9q25" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.141854 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t4sjg"] Feb 26 11:44:37 crc kubenswrapper[4699]: E0226 11:44:37.142338 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a59d7ac-e643-4693-9c6b-994f1fadd83d" containerName="oc" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.142366 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a59d7ac-e643-4693-9c6b-994f1fadd83d" containerName="oc" Feb 26 11:44:37 crc kubenswrapper[4699]: E0226 11:44:37.142411 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e0d37e-fb25-4bbc-afe5-7e6ab304390c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.142428 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e0d37e-fb25-4bbc-afe5-7e6ab304390c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.142678 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a59d7ac-e643-4693-9c6b-994f1fadd83d" containerName="oc" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.142706 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e0d37e-fb25-4bbc-afe5-7e6ab304390c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.143471 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.145450 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.145852 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.145852 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.148203 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.153904 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t4sjg"] Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.262580 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2sgr\" (UniqueName: \"kubernetes.io/projected/2930a730-d5e2-49e1-a618-7428b999a73d-kube-api-access-b2sgr\") pod \"ssh-known-hosts-edpm-deployment-t4sjg\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.262696 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t4sjg\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.262766 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t4sjg\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.366058 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2sgr\" (UniqueName: \"kubernetes.io/projected/2930a730-d5e2-49e1-a618-7428b999a73d-kube-api-access-b2sgr\") pod \"ssh-known-hosts-edpm-deployment-t4sjg\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.366250 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t4sjg\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.366326 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t4sjg\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.370697 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t4sjg\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:37 crc kubenswrapper[4699]: 
I0226 11:44:37.370710 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t4sjg\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.385240 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2sgr\" (UniqueName: \"kubernetes.io/projected/2930a730-d5e2-49e1-a618-7428b999a73d-kube-api-access-b2sgr\") pod \"ssh-known-hosts-edpm-deployment-t4sjg\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:37 crc kubenswrapper[4699]: I0226 11:44:37.467553 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:38 crc kubenswrapper[4699]: I0226 11:44:38.002851 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t4sjg"] Feb 26 11:44:38 crc kubenswrapper[4699]: I0226 11:44:38.067465 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" event={"ID":"2930a730-d5e2-49e1-a618-7428b999a73d","Type":"ContainerStarted","Data":"2063f30625bff358b16eb9d11ebeaaff802901d1ca01220a33d3df5d5689163f"} Feb 26 11:44:39 crc kubenswrapper[4699]: I0226 11:44:39.108302 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" event={"ID":"2930a730-d5e2-49e1-a618-7428b999a73d","Type":"ContainerStarted","Data":"fb17c90a52d984a5a986e30c43494b9457c4431e764fdc4b4d2b63320bf412e8"} Feb 26 11:44:39 crc kubenswrapper[4699]: I0226 11:44:39.130651 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" podStartSLOduration=1.534482164 
podStartE2EDuration="2.130629655s" podCreationTimestamp="2026-02-26 11:44:37 +0000 UTC" firstStartedPulling="2026-02-26 11:44:38.000601692 +0000 UTC m=+2023.811428146" lastFinishedPulling="2026-02-26 11:44:38.596749203 +0000 UTC m=+2024.407575637" observedRunningTime="2026-02-26 11:44:39.126193148 +0000 UTC m=+2024.937019572" watchObservedRunningTime="2026-02-26 11:44:39.130629655 +0000 UTC m=+2024.941456089" Feb 26 11:44:46 crc kubenswrapper[4699]: I0226 11:44:46.185026 4699 generic.go:334] "Generic (PLEG): container finished" podID="2930a730-d5e2-49e1-a618-7428b999a73d" containerID="fb17c90a52d984a5a986e30c43494b9457c4431e764fdc4b4d2b63320bf412e8" exitCode=0 Feb 26 11:44:46 crc kubenswrapper[4699]: I0226 11:44:46.185084 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" event={"ID":"2930a730-d5e2-49e1-a618-7428b999a73d","Type":"ContainerDied","Data":"fb17c90a52d984a5a986e30c43494b9457c4431e764fdc4b4d2b63320bf412e8"} Feb 26 11:44:47 crc kubenswrapper[4699]: I0226 11:44:47.603016 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:47 crc kubenswrapper[4699]: I0226 11:44:47.678675 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-ssh-key-openstack-edpm-ipam\") pod \"2930a730-d5e2-49e1-a618-7428b999a73d\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " Feb 26 11:44:47 crc kubenswrapper[4699]: I0226 11:44:47.678833 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2sgr\" (UniqueName: \"kubernetes.io/projected/2930a730-d5e2-49e1-a618-7428b999a73d-kube-api-access-b2sgr\") pod \"2930a730-d5e2-49e1-a618-7428b999a73d\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " Feb 26 11:44:47 crc kubenswrapper[4699]: I0226 11:44:47.678916 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-inventory-0\") pod \"2930a730-d5e2-49e1-a618-7428b999a73d\" (UID: \"2930a730-d5e2-49e1-a618-7428b999a73d\") " Feb 26 11:44:47 crc kubenswrapper[4699]: I0226 11:44:47.684659 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2930a730-d5e2-49e1-a618-7428b999a73d-kube-api-access-b2sgr" (OuterVolumeSpecName: "kube-api-access-b2sgr") pod "2930a730-d5e2-49e1-a618-7428b999a73d" (UID: "2930a730-d5e2-49e1-a618-7428b999a73d"). InnerVolumeSpecName "kube-api-access-b2sgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:44:47 crc kubenswrapper[4699]: I0226 11:44:47.708554 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2930a730-d5e2-49e1-a618-7428b999a73d" (UID: "2930a730-d5e2-49e1-a618-7428b999a73d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:44:47 crc kubenswrapper[4699]: I0226 11:44:47.713385 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2930a730-d5e2-49e1-a618-7428b999a73d" (UID: "2930a730-d5e2-49e1-a618-7428b999a73d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:44:47 crc kubenswrapper[4699]: I0226 11:44:47.782843 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:47 crc kubenswrapper[4699]: I0226 11:44:47.782946 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2sgr\" (UniqueName: \"kubernetes.io/projected/2930a730-d5e2-49e1-a618-7428b999a73d-kube-api-access-b2sgr\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:47 crc kubenswrapper[4699]: I0226 11:44:47.782963 4699 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2930a730-d5e2-49e1-a618-7428b999a73d-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.202007 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" 
event={"ID":"2930a730-d5e2-49e1-a618-7428b999a73d","Type":"ContainerDied","Data":"2063f30625bff358b16eb9d11ebeaaff802901d1ca01220a33d3df5d5689163f"} Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.202454 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2063f30625bff358b16eb9d11ebeaaff802901d1ca01220a33d3df5d5689163f" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.202090 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t4sjg" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.307040 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv"] Feb 26 11:44:48 crc kubenswrapper[4699]: E0226 11:44:48.307702 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2930a730-d5e2-49e1-a618-7428b999a73d" containerName="ssh-known-hosts-edpm-deployment" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.307723 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2930a730-d5e2-49e1-a618-7428b999a73d" containerName="ssh-known-hosts-edpm-deployment" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.307962 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="2930a730-d5e2-49e1-a618-7428b999a73d" containerName="ssh-known-hosts-edpm-deployment" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.308793 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.310869 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.310978 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.311002 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.312449 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.329454 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv"] Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.396810 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8w2tv\" (UID: \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.397168 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkgkx\" (UniqueName: \"kubernetes.io/projected/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-kube-api-access-xkgkx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8w2tv\" (UID: \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.397375 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8w2tv\" (UID: \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.500744 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8w2tv\" (UID: \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.500966 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8w2tv\" (UID: \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.501023 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkgkx\" (UniqueName: \"kubernetes.io/projected/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-kube-api-access-xkgkx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8w2tv\" (UID: \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.506017 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8w2tv\" (UID: 
\"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.510703 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8w2tv\" (UID: \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.517366 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkgkx\" (UniqueName: \"kubernetes.io/projected/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-kube-api-access-xkgkx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8w2tv\" (UID: \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:48 crc kubenswrapper[4699]: I0226 11:44:48.628988 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:49 crc kubenswrapper[4699]: I0226 11:44:49.129687 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv"] Feb 26 11:44:49 crc kubenswrapper[4699]: I0226 11:44:49.210370 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" event={"ID":"96b6beba-4e99-4cb7-b49b-3f211c5e12b7","Type":"ContainerStarted","Data":"9574220a6d18084e8f19822098cf7500705998788cce5d11548c5e481341c3cc"} Feb 26 11:44:50 crc kubenswrapper[4699]: I0226 11:44:50.225337 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" event={"ID":"96b6beba-4e99-4cb7-b49b-3f211c5e12b7","Type":"ContainerStarted","Data":"77c59d81e51e69d4d4ba9639877a0bef167616ca48d8fad172b42d426051feab"} Feb 26 11:44:50 crc kubenswrapper[4699]: I0226 11:44:50.245625 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" podStartSLOduration=1.6117989640000001 podStartE2EDuration="2.245606801s" podCreationTimestamp="2026-02-26 11:44:48 +0000 UTC" firstStartedPulling="2026-02-26 11:44:49.148540439 +0000 UTC m=+2034.959366873" lastFinishedPulling="2026-02-26 11:44:49.782348276 +0000 UTC m=+2035.593174710" observedRunningTime="2026-02-26 11:44:50.242053709 +0000 UTC m=+2036.052880163" watchObservedRunningTime="2026-02-26 11:44:50.245606801 +0000 UTC m=+2036.056433235" Feb 26 11:44:58 crc kubenswrapper[4699]: I0226 11:44:58.288838 4699 generic.go:334] "Generic (PLEG): container finished" podID="96b6beba-4e99-4cb7-b49b-3f211c5e12b7" containerID="77c59d81e51e69d4d4ba9639877a0bef167616ca48d8fad172b42d426051feab" exitCode=0 Feb 26 11:44:58 crc kubenswrapper[4699]: I0226 11:44:58.288955 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" event={"ID":"96b6beba-4e99-4cb7-b49b-3f211c5e12b7","Type":"ContainerDied","Data":"77c59d81e51e69d4d4ba9639877a0bef167616ca48d8fad172b42d426051feab"} Feb 26 11:44:59 crc kubenswrapper[4699]: I0226 11:44:59.711671 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:44:59 crc kubenswrapper[4699]: I0226 11:44:59.835752 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-inventory\") pod \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\" (UID: \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " Feb 26 11:44:59 crc kubenswrapper[4699]: I0226 11:44:59.835992 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkgkx\" (UniqueName: \"kubernetes.io/projected/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-kube-api-access-xkgkx\") pod \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\" (UID: \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " Feb 26 11:44:59 crc kubenswrapper[4699]: I0226 11:44:59.836105 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-ssh-key-openstack-edpm-ipam\") pod \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\" (UID: \"96b6beba-4e99-4cb7-b49b-3f211c5e12b7\") " Feb 26 11:44:59 crc kubenswrapper[4699]: I0226 11:44:59.842306 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-kube-api-access-xkgkx" (OuterVolumeSpecName: "kube-api-access-xkgkx") pod "96b6beba-4e99-4cb7-b49b-3f211c5e12b7" (UID: "96b6beba-4e99-4cb7-b49b-3f211c5e12b7"). InnerVolumeSpecName "kube-api-access-xkgkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:44:59 crc kubenswrapper[4699]: I0226 11:44:59.862458 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96b6beba-4e99-4cb7-b49b-3f211c5e12b7" (UID: "96b6beba-4e99-4cb7-b49b-3f211c5e12b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:44:59 crc kubenswrapper[4699]: I0226 11:44:59.863389 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-inventory" (OuterVolumeSpecName: "inventory") pod "96b6beba-4e99-4cb7-b49b-3f211c5e12b7" (UID: "96b6beba-4e99-4cb7-b49b-3f211c5e12b7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:44:59 crc kubenswrapper[4699]: I0226 11:44:59.938470 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkgkx\" (UniqueName: \"kubernetes.io/projected/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-kube-api-access-xkgkx\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:59 crc kubenswrapper[4699]: I0226 11:44:59.938502 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:59 crc kubenswrapper[4699]: I0226 11:44:59.938516 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96b6beba-4e99-4cb7-b49b-3f211c5e12b7-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.151780 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8"] Feb 26 11:45:00 crc 
kubenswrapper[4699]: E0226 11:45:00.152419 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b6beba-4e99-4cb7-b49b-3f211c5e12b7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.152442 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b6beba-4e99-4cb7-b49b-3f211c5e12b7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.152615 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b6beba-4e99-4cb7-b49b-3f211c5e12b7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.155726 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.159236 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.159693 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.184229 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8"] Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.244168 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3108c286-4671-45d5-ac60-fc5d8f4a9c17-config-volume\") pod \"collect-profiles-29535105-qcdn8\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.244242 4699 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3108c286-4671-45d5-ac60-fc5d8f4a9c17-secret-volume\") pod \"collect-profiles-29535105-qcdn8\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.244295 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-567tp\" (UniqueName: \"kubernetes.io/projected/3108c286-4671-45d5-ac60-fc5d8f4a9c17-kube-api-access-567tp\") pod \"collect-profiles-29535105-qcdn8\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.309360 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" event={"ID":"96b6beba-4e99-4cb7-b49b-3f211c5e12b7","Type":"ContainerDied","Data":"9574220a6d18084e8f19822098cf7500705998788cce5d11548c5e481341c3cc"} Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.309839 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9574220a6d18084e8f19822098cf7500705998788cce5d11548c5e481341c3cc" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.309488 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8w2tv" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.356239 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3108c286-4671-45d5-ac60-fc5d8f4a9c17-config-volume\") pod \"collect-profiles-29535105-qcdn8\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.356397 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3108c286-4671-45d5-ac60-fc5d8f4a9c17-secret-volume\") pod \"collect-profiles-29535105-qcdn8\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.356519 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-567tp\" (UniqueName: \"kubernetes.io/projected/3108c286-4671-45d5-ac60-fc5d8f4a9c17-kube-api-access-567tp\") pod \"collect-profiles-29535105-qcdn8\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.360369 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3108c286-4671-45d5-ac60-fc5d8f4a9c17-config-volume\") pod \"collect-profiles-29535105-qcdn8\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.366210 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3108c286-4671-45d5-ac60-fc5d8f4a9c17-secret-volume\") pod \"collect-profiles-29535105-qcdn8\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.391981 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-567tp\" (UniqueName: \"kubernetes.io/projected/3108c286-4671-45d5-ac60-fc5d8f4a9c17-kube-api-access-567tp\") pod \"collect-profiles-29535105-qcdn8\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.406869 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l"] Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.408185 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.416006 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.416014 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.416884 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.417313 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l"] Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.418434 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:45:00 crc 
kubenswrapper[4699]: I0226 11:45:00.481634 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.561649 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjhg4\" (UniqueName: \"kubernetes.io/projected/a1aabb80-3c23-4f5a-9bd1-4d573089856c-kube-api-access-kjhg4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.561986 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.562022 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.664104 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.664181 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.664318 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjhg4\" (UniqueName: \"kubernetes.io/projected/a1aabb80-3c23-4f5a-9bd1-4d573089856c-kube-api-access-kjhg4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.669046 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.672942 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.683161 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjhg4\" (UniqueName: 
\"kubernetes.io/projected/a1aabb80-3c23-4f5a-9bd1-4d573089856c-kube-api-access-kjhg4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.737862 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:00 crc kubenswrapper[4699]: I0226 11:45:00.903411 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8"] Feb 26 11:45:00 crc kubenswrapper[4699]: W0226 11:45:00.909510 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3108c286_4671_45d5_ac60_fc5d8f4a9c17.slice/crio-b858b8207af0ca8569020fadec0a199634c2068e68731588f058f544f56c3b11 WatchSource:0}: Error finding container b858b8207af0ca8569020fadec0a199634c2068e68731588f058f544f56c3b11: Status 404 returned error can't find the container with id b858b8207af0ca8569020fadec0a199634c2068e68731588f058f544f56c3b11 Feb 26 11:45:01 crc kubenswrapper[4699]: I0226 11:45:01.263527 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l"] Feb 26 11:45:01 crc kubenswrapper[4699]: W0226 11:45:01.266541 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1aabb80_3c23_4f5a_9bd1_4d573089856c.slice/crio-44db8411ee307977b727b88c04835def7349e614d20428a8ac1dce69f67272db WatchSource:0}: Error finding container 44db8411ee307977b727b88c04835def7349e614d20428a8ac1dce69f67272db: Status 404 returned error can't find the container with id 44db8411ee307977b727b88c04835def7349e614d20428a8ac1dce69f67272db Feb 26 11:45:01 crc kubenswrapper[4699]: I0226 11:45:01.339038 
4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" event={"ID":"3108c286-4671-45d5-ac60-fc5d8f4a9c17","Type":"ContainerStarted","Data":"b858b8207af0ca8569020fadec0a199634c2068e68731588f058f544f56c3b11"} Feb 26 11:45:01 crc kubenswrapper[4699]: I0226 11:45:01.339967 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" event={"ID":"a1aabb80-3c23-4f5a-9bd1-4d573089856c","Type":"ContainerStarted","Data":"44db8411ee307977b727b88c04835def7349e614d20428a8ac1dce69f67272db"} Feb 26 11:45:02 crc kubenswrapper[4699]: I0226 11:45:02.350958 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" event={"ID":"3108c286-4671-45d5-ac60-fc5d8f4a9c17","Type":"ContainerDied","Data":"5adf826586bde3c763d1261019557f3e2b59eaf1c943090396c46b11018cc761"} Feb 26 11:45:02 crc kubenswrapper[4699]: I0226 11:45:02.350822 4699 generic.go:334] "Generic (PLEG): container finished" podID="3108c286-4671-45d5-ac60-fc5d8f4a9c17" containerID="5adf826586bde3c763d1261019557f3e2b59eaf1c943090396c46b11018cc761" exitCode=0 Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.365757 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" event={"ID":"a1aabb80-3c23-4f5a-9bd1-4d573089856c","Type":"ContainerStarted","Data":"8bafcbd10d6755dab078792a930009cce1ed58cd9190fd7aecf6ae0b9170fff3"} Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.398964 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" podStartSLOduration=2.488979303 podStartE2EDuration="3.398940449s" podCreationTimestamp="2026-02-26 11:45:00 +0000 UTC" firstStartedPulling="2026-02-26 11:45:01.270986887 +0000 UTC m=+2047.081813321" lastFinishedPulling="2026-02-26 
11:45:02.180948023 +0000 UTC m=+2047.991774467" observedRunningTime="2026-02-26 11:45:03.388732087 +0000 UTC m=+2049.199558541" watchObservedRunningTime="2026-02-26 11:45:03.398940449 +0000 UTC m=+2049.209766883" Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.677952 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.831864 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3108c286-4671-45d5-ac60-fc5d8f4a9c17-config-volume\") pod \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.831924 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3108c286-4671-45d5-ac60-fc5d8f4a9c17-secret-volume\") pod \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.832016 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-567tp\" (UniqueName: \"kubernetes.io/projected/3108c286-4671-45d5-ac60-fc5d8f4a9c17-kube-api-access-567tp\") pod \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\" (UID: \"3108c286-4671-45d5-ac60-fc5d8f4a9c17\") " Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.833257 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3108c286-4671-45d5-ac60-fc5d8f4a9c17-config-volume" (OuterVolumeSpecName: "config-volume") pod "3108c286-4671-45d5-ac60-fc5d8f4a9c17" (UID: "3108c286-4671-45d5-ac60-fc5d8f4a9c17"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.838655 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3108c286-4671-45d5-ac60-fc5d8f4a9c17-kube-api-access-567tp" (OuterVolumeSpecName: "kube-api-access-567tp") pod "3108c286-4671-45d5-ac60-fc5d8f4a9c17" (UID: "3108c286-4671-45d5-ac60-fc5d8f4a9c17"). InnerVolumeSpecName "kube-api-access-567tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.852538 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3108c286-4671-45d5-ac60-fc5d8f4a9c17-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3108c286-4671-45d5-ac60-fc5d8f4a9c17" (UID: "3108c286-4671-45d5-ac60-fc5d8f4a9c17"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.935022 4699 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3108c286-4671-45d5-ac60-fc5d8f4a9c17-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.935067 4699 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3108c286-4671-45d5-ac60-fc5d8f4a9c17-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:03 crc kubenswrapper[4699]: I0226 11:45:03.935080 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-567tp\" (UniqueName: \"kubernetes.io/projected/3108c286-4671-45d5-ac60-fc5d8f4a9c17-kube-api-access-567tp\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:04 crc kubenswrapper[4699]: I0226 11:45:04.379591 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" 
event={"ID":"3108c286-4671-45d5-ac60-fc5d8f4a9c17","Type":"ContainerDied","Data":"b858b8207af0ca8569020fadec0a199634c2068e68731588f058f544f56c3b11"} Feb 26 11:45:04 crc kubenswrapper[4699]: I0226 11:45:04.380025 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b858b8207af0ca8569020fadec0a199634c2068e68731588f058f544f56c3b11" Feb 26 11:45:04 crc kubenswrapper[4699]: I0226 11:45:04.379616 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-qcdn8" Feb 26 11:45:04 crc kubenswrapper[4699]: I0226 11:45:04.764337 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd"] Feb 26 11:45:04 crc kubenswrapper[4699]: I0226 11:45:04.773511 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535060-f97rd"] Feb 26 11:45:06 crc kubenswrapper[4699]: I0226 11:45:06.273226 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8a28b8-c47b-4288-877f-8e90a3b581b5" path="/var/lib/kubelet/pods/5f8a28b8-c47b-4288-877f-8e90a3b581b5/volumes" Feb 26 11:45:11 crc kubenswrapper[4699]: I0226 11:45:11.585272 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:45:11 crc kubenswrapper[4699]: I0226 11:45:11.585829 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:45:13 crc 
kubenswrapper[4699]: E0226 11:45:13.475474 4699 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.215s" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.475852 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x2xcg"] Feb 26 11:45:13 crc kubenswrapper[4699]: E0226 11:45:13.476215 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3108c286-4671-45d5-ac60-fc5d8f4a9c17" containerName="collect-profiles" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.476229 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="3108c286-4671-45d5-ac60-fc5d8f4a9c17" containerName="collect-profiles" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.476424 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="3108c286-4671-45d5-ac60-fc5d8f4a9c17" containerName="collect-profiles" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.478066 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.483106 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2xcg"] Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.503097 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-utilities\") pod \"redhat-marketplace-x2xcg\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.503295 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-catalog-content\") pod \"redhat-marketplace-x2xcg\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.503384 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg8xl\" (UniqueName: \"kubernetes.io/projected/d945a7c6-8e43-4dae-8521-e5e8b04f612d-kube-api-access-mg8xl\") pod \"redhat-marketplace-x2xcg\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.607161 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-utilities\") pod \"redhat-marketplace-x2xcg\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.607693 4699 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-utilities\") pod \"redhat-marketplace-x2xcg\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.607900 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-catalog-content\") pod \"redhat-marketplace-x2xcg\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.607969 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg8xl\" (UniqueName: \"kubernetes.io/projected/d945a7c6-8e43-4dae-8521-e5e8b04f612d-kube-api-access-mg8xl\") pod \"redhat-marketplace-x2xcg\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.608585 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-catalog-content\") pod \"redhat-marketplace-x2xcg\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.634979 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg8xl\" (UniqueName: \"kubernetes.io/projected/d945a7c6-8e43-4dae-8521-e5e8b04f612d-kube-api-access-mg8xl\") pod \"redhat-marketplace-x2xcg\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:13 crc kubenswrapper[4699]: I0226 11:45:13.822220 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:14 crc kubenswrapper[4699]: I0226 11:45:14.291616 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2xcg"] Feb 26 11:45:14 crc kubenswrapper[4699]: I0226 11:45:14.486496 4699 generic.go:334] "Generic (PLEG): container finished" podID="a1aabb80-3c23-4f5a-9bd1-4d573089856c" containerID="8bafcbd10d6755dab078792a930009cce1ed58cd9190fd7aecf6ae0b9170fff3" exitCode=0 Feb 26 11:45:14 crc kubenswrapper[4699]: I0226 11:45:14.486587 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" event={"ID":"a1aabb80-3c23-4f5a-9bd1-4d573089856c","Type":"ContainerDied","Data":"8bafcbd10d6755dab078792a930009cce1ed58cd9190fd7aecf6ae0b9170fff3"} Feb 26 11:45:14 crc kubenswrapper[4699]: I0226 11:45:14.487800 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2xcg" event={"ID":"d945a7c6-8e43-4dae-8521-e5e8b04f612d","Type":"ContainerStarted","Data":"1369ca8ad6f82a788fe86943ed6a84b06e1a5107f0a845225dd71243c41f0ef7"} Feb 26 11:45:15 crc kubenswrapper[4699]: I0226 11:45:15.504683 4699 generic.go:334] "Generic (PLEG): container finished" podID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" containerID="4cf3d7d09ae2ea0a26c0849df1b9968db98d3306fdbbd4dc77bd87b713945f7a" exitCode=0 Feb 26 11:45:15 crc kubenswrapper[4699]: I0226 11:45:15.505241 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2xcg" event={"ID":"d945a7c6-8e43-4dae-8521-e5e8b04f612d","Type":"ContainerDied","Data":"4cf3d7d09ae2ea0a26c0849df1b9968db98d3306fdbbd4dc77bd87b713945f7a"} Feb 26 11:45:15 crc kubenswrapper[4699]: I0226 11:45:15.987298 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.160990 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjhg4\" (UniqueName: \"kubernetes.io/projected/a1aabb80-3c23-4f5a-9bd1-4d573089856c-kube-api-access-kjhg4\") pod \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.161055 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-ssh-key-openstack-edpm-ipam\") pod \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.161146 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-inventory\") pod \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\" (UID: \"a1aabb80-3c23-4f5a-9bd1-4d573089856c\") " Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.175614 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1aabb80-3c23-4f5a-9bd1-4d573089856c-kube-api-access-kjhg4" (OuterVolumeSpecName: "kube-api-access-kjhg4") pod "a1aabb80-3c23-4f5a-9bd1-4d573089856c" (UID: "a1aabb80-3c23-4f5a-9bd1-4d573089856c"). InnerVolumeSpecName "kube-api-access-kjhg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.186752 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a1aabb80-3c23-4f5a-9bd1-4d573089856c" (UID: "a1aabb80-3c23-4f5a-9bd1-4d573089856c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.194394 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-inventory" (OuterVolumeSpecName: "inventory") pod "a1aabb80-3c23-4f5a-9bd1-4d573089856c" (UID: "a1aabb80-3c23-4f5a-9bd1-4d573089856c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.263569 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjhg4\" (UniqueName: \"kubernetes.io/projected/a1aabb80-3c23-4f5a-9bd1-4d573089856c-kube-api-access-kjhg4\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.263623 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.263641 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1aabb80-3c23-4f5a-9bd1-4d573089856c-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.478747 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cxw5q"] Feb 26 11:45:16 crc kubenswrapper[4699]: E0226 
11:45:16.479821 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1aabb80-3c23-4f5a-9bd1-4d573089856c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.479849 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1aabb80-3c23-4f5a-9bd1-4d573089856c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.480036 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1aabb80-3c23-4f5a-9bd1-4d573089856c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.481603 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.489604 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxw5q"] Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.529565 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" event={"ID":"a1aabb80-3c23-4f5a-9bd1-4d573089856c","Type":"ContainerDied","Data":"44db8411ee307977b727b88c04835def7349e614d20428a8ac1dce69f67272db"} Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.529619 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44db8411ee307977b727b88c04835def7349e614d20428a8ac1dce69f67272db" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.529706 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.539545 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2xcg" event={"ID":"d945a7c6-8e43-4dae-8521-e5e8b04f612d","Type":"ContainerStarted","Data":"aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6"} Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.672161 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-utilities\") pod \"certified-operators-cxw5q\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.672330 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb7bf\" (UniqueName: \"kubernetes.io/projected/b82d852b-9054-4ee4-96b8-36f007b257f3-kube-api-access-jb7bf\") pod \"certified-operators-cxw5q\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.672444 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-catalog-content\") pod \"certified-operators-cxw5q\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.674834 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv"] Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.676324 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.678774 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.678921 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.680441 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.680772 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.680953 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.681106 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.681277 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.681466 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.722986 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv"] Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774150 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774203 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774232 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774276 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774330 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb7bf\" (UniqueName: 
\"kubernetes.io/projected/b82d852b-9054-4ee4-96b8-36f007b257f3-kube-api-access-jb7bf\") pod \"certified-operators-cxw5q\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774372 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774417 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-catalog-content\") pod \"certified-operators-cxw5q\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774438 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774463 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: 
\"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774510 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774535 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774555 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-utilities\") pod \"certified-operators-cxw5q\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774588 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774608 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjnhg\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-kube-api-access-cjnhg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774640 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774665 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774681 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774924 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-catalog-content\") pod \"certified-operators-cxw5q\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.774984 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-utilities\") pod \"certified-operators-cxw5q\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.793832 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb7bf\" (UniqueName: \"kubernetes.io/projected/b82d852b-9054-4ee4-96b8-36f007b257f3-kube-api-access-jb7bf\") pod \"certified-operators-cxw5q\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.803416 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.876510 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.876584 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjnhg\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-kube-api-access-cjnhg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.876622 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.876657 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc 
kubenswrapper[4699]: I0226 11:45:16.876685 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.876718 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.876742 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.876772 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.876804 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.876873 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.876934 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.876957 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.877017 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.877041 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.881252 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.882643 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.883311 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.883793 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.891247 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.896044 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.898701 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" 
(UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.900305 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.898718 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.900745 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.901915 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc 
kubenswrapper[4699]: I0226 11:45:16.904710 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjnhg\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-kube-api-access-cjnhg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.906931 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:16 crc kubenswrapper[4699]: I0226 11:45:16.956934 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:17 crc kubenswrapper[4699]: I0226 11:45:17.001803 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:17 crc kubenswrapper[4699]: E0226 11:45:17.072824 4699 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd945a7c6_8e43_4dae_8521_e5e8b04f612d.slice/crio-conmon-aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6.scope\": RecentStats: unable to find data in memory cache]" Feb 26 11:45:17 crc kubenswrapper[4699]: I0226 11:45:17.269653 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxw5q"] Feb 26 11:45:17 crc kubenswrapper[4699]: I0226 11:45:17.429562 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv"] Feb 26 11:45:17 crc kubenswrapper[4699]: I0226 11:45:17.549151 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" event={"ID":"e537c30c-dc6b-406f-bb86-5540ebd8a36d","Type":"ContainerStarted","Data":"d402e0aae0335e59241cdcb945195a8b0e0b32ee1dcab2f8a43f80904ff391a8"} Feb 26 11:45:17 crc kubenswrapper[4699]: I0226 11:45:17.554271 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxw5q" event={"ID":"b82d852b-9054-4ee4-96b8-36f007b257f3","Type":"ContainerStarted","Data":"e2dda033de4ce7f20b34713e48e734282d1b4f963bc9f0f967f7a0138dec93ca"} Feb 26 11:45:17 crc kubenswrapper[4699]: I0226 11:45:17.560044 4699 generic.go:334] "Generic (PLEG): container finished" podID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" containerID="aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6" exitCode=0 Feb 26 11:45:17 crc kubenswrapper[4699]: I0226 11:45:17.560091 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2xcg" 
event={"ID":"d945a7c6-8e43-4dae-8521-e5e8b04f612d","Type":"ContainerDied","Data":"aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6"} Feb 26 11:45:18 crc kubenswrapper[4699]: I0226 11:45:18.569899 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2xcg" event={"ID":"d945a7c6-8e43-4dae-8521-e5e8b04f612d","Type":"ContainerStarted","Data":"caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a"} Feb 26 11:45:18 crc kubenswrapper[4699]: I0226 11:45:18.580298 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" event={"ID":"e537c30c-dc6b-406f-bb86-5540ebd8a36d","Type":"ContainerStarted","Data":"61635d559717b9b7a130fef3ce5799ea8bf06b1611ecc7c36061490fa0b8373e"} Feb 26 11:45:18 crc kubenswrapper[4699]: I0226 11:45:18.585735 4699 generic.go:334] "Generic (PLEG): container finished" podID="b82d852b-9054-4ee4-96b8-36f007b257f3" containerID="005da2508f8e2e44e7242ce1462ad8bb033f8eace2f1fa0bfabd16956b3b2978" exitCode=0 Feb 26 11:45:18 crc kubenswrapper[4699]: I0226 11:45:18.585807 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxw5q" event={"ID":"b82d852b-9054-4ee4-96b8-36f007b257f3","Type":"ContainerDied","Data":"005da2508f8e2e44e7242ce1462ad8bb033f8eace2f1fa0bfabd16956b3b2978"} Feb 26 11:45:18 crc kubenswrapper[4699]: I0226 11:45:18.602667 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x2xcg" podStartSLOduration=3.166418045 podStartE2EDuration="5.602640123s" podCreationTimestamp="2026-02-26 11:45:13 +0000 UTC" firstStartedPulling="2026-02-26 11:45:15.508676654 +0000 UTC m=+2061.319503128" lastFinishedPulling="2026-02-26 11:45:17.944898762 +0000 UTC m=+2063.755725206" observedRunningTime="2026-02-26 11:45:18.59486358 +0000 UTC m=+2064.405690044" watchObservedRunningTime="2026-02-26 11:45:18.602640123 
+0000 UTC m=+2064.413466557" Feb 26 11:45:18 crc kubenswrapper[4699]: I0226 11:45:18.622089 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" podStartSLOduration=2.240843316 podStartE2EDuration="2.622051207s" podCreationTimestamp="2026-02-26 11:45:16 +0000 UTC" firstStartedPulling="2026-02-26 11:45:17.436768385 +0000 UTC m=+2063.247594819" lastFinishedPulling="2026-02-26 11:45:17.817976276 +0000 UTC m=+2063.628802710" observedRunningTime="2026-02-26 11:45:18.612573626 +0000 UTC m=+2064.423400090" watchObservedRunningTime="2026-02-26 11:45:18.622051207 +0000 UTC m=+2064.432877641" Feb 26 11:45:20 crc kubenswrapper[4699]: I0226 11:45:20.632034 4699 generic.go:334] "Generic (PLEG): container finished" podID="b82d852b-9054-4ee4-96b8-36f007b257f3" containerID="997ec4ad763969aa9a44f4675ac014308b3c270558206c207a14b13763bab310" exitCode=0 Feb 26 11:45:20 crc kubenswrapper[4699]: I0226 11:45:20.632167 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxw5q" event={"ID":"b82d852b-9054-4ee4-96b8-36f007b257f3","Type":"ContainerDied","Data":"997ec4ad763969aa9a44f4675ac014308b3c270558206c207a14b13763bab310"} Feb 26 11:45:21 crc kubenswrapper[4699]: I0226 11:45:21.642336 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxw5q" event={"ID":"b82d852b-9054-4ee4-96b8-36f007b257f3","Type":"ContainerStarted","Data":"a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01"} Feb 26 11:45:21 crc kubenswrapper[4699]: I0226 11:45:21.663264 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cxw5q" podStartSLOduration=2.954975199 podStartE2EDuration="5.663240679s" podCreationTimestamp="2026-02-26 11:45:16 +0000 UTC" firstStartedPulling="2026-02-26 11:45:18.588645073 +0000 UTC m=+2064.399471517" 
lastFinishedPulling="2026-02-26 11:45:21.296910563 +0000 UTC m=+2067.107736997" observedRunningTime="2026-02-26 11:45:21.660518381 +0000 UTC m=+2067.471344845" watchObservedRunningTime="2026-02-26 11:45:21.663240679 +0000 UTC m=+2067.474067123" Feb 26 11:45:23 crc kubenswrapper[4699]: I0226 11:45:23.822101 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:23 crc kubenswrapper[4699]: I0226 11:45:23.822442 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:23 crc kubenswrapper[4699]: I0226 11:45:23.878359 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:24 crc kubenswrapper[4699]: I0226 11:45:24.722663 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:26 crc kubenswrapper[4699]: I0226 11:45:26.066318 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2xcg"] Feb 26 11:45:26 crc kubenswrapper[4699]: I0226 11:45:26.804213 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:26 crc kubenswrapper[4699]: I0226 11:45:26.804668 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:26 crc kubenswrapper[4699]: I0226 11:45:26.855095 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:27 crc kubenswrapper[4699]: I0226 11:45:27.693886 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x2xcg" podUID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" 
containerName="registry-server" containerID="cri-o://caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a" gracePeriod=2 Feb 26 11:45:27 crc kubenswrapper[4699]: I0226 11:45:27.740442 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.130634 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.215353 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-utilities\") pod \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.215526 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-catalog-content\") pod \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.215610 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg8xl\" (UniqueName: \"kubernetes.io/projected/d945a7c6-8e43-4dae-8521-e5e8b04f612d-kube-api-access-mg8xl\") pod \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\" (UID: \"d945a7c6-8e43-4dae-8521-e5e8b04f612d\") " Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.216200 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-utilities" (OuterVolumeSpecName: "utilities") pod "d945a7c6-8e43-4dae-8521-e5e8b04f612d" (UID: "d945a7c6-8e43-4dae-8521-e5e8b04f612d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.223310 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d945a7c6-8e43-4dae-8521-e5e8b04f612d-kube-api-access-mg8xl" (OuterVolumeSpecName: "kube-api-access-mg8xl") pod "d945a7c6-8e43-4dae-8521-e5e8b04f612d" (UID: "d945a7c6-8e43-4dae-8521-e5e8b04f612d"). InnerVolumeSpecName "kube-api-access-mg8xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.237390 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d945a7c6-8e43-4dae-8521-e5e8b04f612d" (UID: "d945a7c6-8e43-4dae-8521-e5e8b04f612d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.317896 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.317926 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d945a7c6-8e43-4dae-8521-e5e8b04f612d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.317936 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg8xl\" (UniqueName: \"kubernetes.io/projected/d945a7c6-8e43-4dae-8521-e5e8b04f612d-kube-api-access-mg8xl\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.703935 4699 generic.go:334] "Generic (PLEG): container finished" podID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" 
containerID="caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a" exitCode=0 Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.704033 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2xcg" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.704077 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2xcg" event={"ID":"d945a7c6-8e43-4dae-8521-e5e8b04f612d","Type":"ContainerDied","Data":"caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a"} Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.704450 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2xcg" event={"ID":"d945a7c6-8e43-4dae-8521-e5e8b04f612d","Type":"ContainerDied","Data":"1369ca8ad6f82a788fe86943ed6a84b06e1a5107f0a845225dd71243c41f0ef7"} Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.704486 4699 scope.go:117] "RemoveContainer" containerID="caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.732639 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2xcg"] Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.733915 4699 scope.go:117] "RemoveContainer" containerID="aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.740779 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2xcg"] Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.756774 4699 scope.go:117] "RemoveContainer" containerID="4cf3d7d09ae2ea0a26c0849df1b9968db98d3306fdbbd4dc77bd87b713945f7a" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.795738 4699 scope.go:117] "RemoveContainer" containerID="caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a" Feb 26 
11:45:28 crc kubenswrapper[4699]: E0226 11:45:28.796177 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a\": container with ID starting with caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a not found: ID does not exist" containerID="caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.796220 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a"} err="failed to get container status \"caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a\": rpc error: code = NotFound desc = could not find container \"caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a\": container with ID starting with caf977fa548550b2ff07142c0cf5642b71f889e64d844100198dfaac448f0f9a not found: ID does not exist" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.796246 4699 scope.go:117] "RemoveContainer" containerID="aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6" Feb 26 11:45:28 crc kubenswrapper[4699]: E0226 11:45:28.796691 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6\": container with ID starting with aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6 not found: ID does not exist" containerID="aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.796752 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6"} err="failed to get container status 
\"aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6\": rpc error: code = NotFound desc = could not find container \"aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6\": container with ID starting with aec8d60c34d09df244e1612b82212b1617de26a8807d6e9649ca7c202efe59a6 not found: ID does not exist" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.796790 4699 scope.go:117] "RemoveContainer" containerID="4cf3d7d09ae2ea0a26c0849df1b9968db98d3306fdbbd4dc77bd87b713945f7a" Feb 26 11:45:28 crc kubenswrapper[4699]: E0226 11:45:28.797087 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cf3d7d09ae2ea0a26c0849df1b9968db98d3306fdbbd4dc77bd87b713945f7a\": container with ID starting with 4cf3d7d09ae2ea0a26c0849df1b9968db98d3306fdbbd4dc77bd87b713945f7a not found: ID does not exist" containerID="4cf3d7d09ae2ea0a26c0849df1b9968db98d3306fdbbd4dc77bd87b713945f7a" Feb 26 11:45:28 crc kubenswrapper[4699]: I0226 11:45:28.797108 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cf3d7d09ae2ea0a26c0849df1b9968db98d3306fdbbd4dc77bd87b713945f7a"} err="failed to get container status \"4cf3d7d09ae2ea0a26c0849df1b9968db98d3306fdbbd4dc77bd87b713945f7a\": rpc error: code = NotFound desc = could not find container \"4cf3d7d09ae2ea0a26c0849df1b9968db98d3306fdbbd4dc77bd87b713945f7a\": container with ID starting with 4cf3d7d09ae2ea0a26c0849df1b9968db98d3306fdbbd4dc77bd87b713945f7a not found: ID does not exist" Feb 26 11:45:29 crc kubenswrapper[4699]: I0226 11:45:29.266383 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxw5q"] Feb 26 11:45:30 crc kubenswrapper[4699]: I0226 11:45:30.271427 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" path="/var/lib/kubelet/pods/d945a7c6-8e43-4dae-8521-e5e8b04f612d/volumes" Feb 26 
11:45:30 crc kubenswrapper[4699]: I0226 11:45:30.725245 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cxw5q" podUID="b82d852b-9054-4ee4-96b8-36f007b257f3" containerName="registry-server" containerID="cri-o://a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01" gracePeriod=2 Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.147597 4699 scope.go:117] "RemoveContainer" containerID="c08f0ffa53e77347fd581c677192ce80109e73083d1caad9bb7251a920a34172" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.157695 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.206129 4699 scope.go:117] "RemoveContainer" containerID="61a2c48ee6bf74ea4766fbbb38a98752e4fc1a270493117d88d14b6af7b2c988" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.270862 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-utilities\") pod \"b82d852b-9054-4ee4-96b8-36f007b257f3\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.270960 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb7bf\" (UniqueName: \"kubernetes.io/projected/b82d852b-9054-4ee4-96b8-36f007b257f3-kube-api-access-jb7bf\") pod \"b82d852b-9054-4ee4-96b8-36f007b257f3\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.271153 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-catalog-content\") pod \"b82d852b-9054-4ee4-96b8-36f007b257f3\" (UID: \"b82d852b-9054-4ee4-96b8-36f007b257f3\") " Feb 26 
11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.271864 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-utilities" (OuterVolumeSpecName: "utilities") pod "b82d852b-9054-4ee4-96b8-36f007b257f3" (UID: "b82d852b-9054-4ee4-96b8-36f007b257f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.278293 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82d852b-9054-4ee4-96b8-36f007b257f3-kube-api-access-jb7bf" (OuterVolumeSpecName: "kube-api-access-jb7bf") pod "b82d852b-9054-4ee4-96b8-36f007b257f3" (UID: "b82d852b-9054-4ee4-96b8-36f007b257f3"). InnerVolumeSpecName "kube-api-access-jb7bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.323603 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b82d852b-9054-4ee4-96b8-36f007b257f3" (UID: "b82d852b-9054-4ee4-96b8-36f007b257f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.373756 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.373788 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb7bf\" (UniqueName: \"kubernetes.io/projected/b82d852b-9054-4ee4-96b8-36f007b257f3-kube-api-access-jb7bf\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.373823 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82d852b-9054-4ee4-96b8-36f007b257f3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.736933 4699 generic.go:334] "Generic (PLEG): container finished" podID="b82d852b-9054-4ee4-96b8-36f007b257f3" containerID="a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01" exitCode=0 Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.736988 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cxw5q" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.737002 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxw5q" event={"ID":"b82d852b-9054-4ee4-96b8-36f007b257f3","Type":"ContainerDied","Data":"a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01"} Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.737191 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxw5q" event={"ID":"b82d852b-9054-4ee4-96b8-36f007b257f3","Type":"ContainerDied","Data":"e2dda033de4ce7f20b34713e48e734282d1b4f963bc9f0f967f7a0138dec93ca"} Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.737228 4699 scope.go:117] "RemoveContainer" containerID="a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.754713 4699 scope.go:117] "RemoveContainer" containerID="997ec4ad763969aa9a44f4675ac014308b3c270558206c207a14b13763bab310" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.773943 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxw5q"] Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.779278 4699 scope.go:117] "RemoveContainer" containerID="005da2508f8e2e44e7242ce1462ad8bb033f8eace2f1fa0bfabd16956b3b2978" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.784809 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cxw5q"] Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.799163 4699 scope.go:117] "RemoveContainer" containerID="a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01" Feb 26 11:45:31 crc kubenswrapper[4699]: E0226 11:45:31.799636 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01\": container with ID starting with a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01 not found: ID does not exist" containerID="a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.799675 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01"} err="failed to get container status \"a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01\": rpc error: code = NotFound desc = could not find container \"a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01\": container with ID starting with a376c7d1e53f63537ecffcad45159d60d5f1146fdbc4b44ddb10c3779c4f9e01 not found: ID does not exist" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.799701 4699 scope.go:117] "RemoveContainer" containerID="997ec4ad763969aa9a44f4675ac014308b3c270558206c207a14b13763bab310" Feb 26 11:45:31 crc kubenswrapper[4699]: E0226 11:45:31.800094 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"997ec4ad763969aa9a44f4675ac014308b3c270558206c207a14b13763bab310\": container with ID starting with 997ec4ad763969aa9a44f4675ac014308b3c270558206c207a14b13763bab310 not found: ID does not exist" containerID="997ec4ad763969aa9a44f4675ac014308b3c270558206c207a14b13763bab310" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.800163 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"997ec4ad763969aa9a44f4675ac014308b3c270558206c207a14b13763bab310"} err="failed to get container status \"997ec4ad763969aa9a44f4675ac014308b3c270558206c207a14b13763bab310\": rpc error: code = NotFound desc = could not find container \"997ec4ad763969aa9a44f4675ac014308b3c270558206c207a14b13763bab310\": container with ID 
starting with 997ec4ad763969aa9a44f4675ac014308b3c270558206c207a14b13763bab310 not found: ID does not exist" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.800177 4699 scope.go:117] "RemoveContainer" containerID="005da2508f8e2e44e7242ce1462ad8bb033f8eace2f1fa0bfabd16956b3b2978" Feb 26 11:45:31 crc kubenswrapper[4699]: E0226 11:45:31.800697 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"005da2508f8e2e44e7242ce1462ad8bb033f8eace2f1fa0bfabd16956b3b2978\": container with ID starting with 005da2508f8e2e44e7242ce1462ad8bb033f8eace2f1fa0bfabd16956b3b2978 not found: ID does not exist" containerID="005da2508f8e2e44e7242ce1462ad8bb033f8eace2f1fa0bfabd16956b3b2978" Feb 26 11:45:31 crc kubenswrapper[4699]: I0226 11:45:31.800737 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"005da2508f8e2e44e7242ce1462ad8bb033f8eace2f1fa0bfabd16956b3b2978"} err="failed to get container status \"005da2508f8e2e44e7242ce1462ad8bb033f8eace2f1fa0bfabd16956b3b2978\": rpc error: code = NotFound desc = could not find container \"005da2508f8e2e44e7242ce1462ad8bb033f8eace2f1fa0bfabd16956b3b2978\": container with ID starting with 005da2508f8e2e44e7242ce1462ad8bb033f8eace2f1fa0bfabd16956b3b2978 not found: ID does not exist" Feb 26 11:45:32 crc kubenswrapper[4699]: I0226 11:45:32.273698 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82d852b-9054-4ee4-96b8-36f007b257f3" path="/var/lib/kubelet/pods/b82d852b-9054-4ee4-96b8-36f007b257f3/volumes" Feb 26 11:45:41 crc kubenswrapper[4699]: I0226 11:45:41.584855 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:45:41 crc kubenswrapper[4699]: I0226 
11:45:41.585494 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:45:53 crc kubenswrapper[4699]: I0226 11:45:53.946618 4699 generic.go:334] "Generic (PLEG): container finished" podID="e537c30c-dc6b-406f-bb86-5540ebd8a36d" containerID="61635d559717b9b7a130fef3ce5799ea8bf06b1611ecc7c36061490fa0b8373e" exitCode=0 Feb 26 11:45:53 crc kubenswrapper[4699]: I0226 11:45:53.946745 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" event={"ID":"e537c30c-dc6b-406f-bb86-5540ebd8a36d","Type":"ContainerDied","Data":"61635d559717b9b7a130fef3ce5799ea8bf06b1611ecc7c36061490fa0b8373e"} Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.310900 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.471631 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-repo-setup-combined-ca-bundle\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.471728 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-bootstrap-combined-ca-bundle\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.471836 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjnhg\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-kube-api-access-cjnhg\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.471872 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-libvirt-combined-ca-bundle\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.471916 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ovn-combined-ca-bundle\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc 
kubenswrapper[4699]: I0226 11:45:55.471966 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.472018 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.472058 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.472095 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.472178 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ssh-key-openstack-edpm-ipam\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: 
\"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.472209 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-nova-combined-ca-bundle\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.472242 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-inventory\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.472284 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-neutron-metadata-combined-ca-bundle\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.472315 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-telemetry-combined-ca-bundle\") pod \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\" (UID: \"e537c30c-dc6b-406f-bb86-5540ebd8a36d\") " Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.477511 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.478361 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-kube-api-access-cjnhg" (OuterVolumeSpecName: "kube-api-access-cjnhg") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "kube-api-access-cjnhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.479760 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.480142 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.481145 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.481644 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.482022 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.482063 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.482085 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.482129 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.482487 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.483833 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.505937 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.507506 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-inventory" (OuterVolumeSpecName: "inventory") pod "e537c30c-dc6b-406f-bb86-5540ebd8a36d" (UID: "e537c30c-dc6b-406f-bb86-5540ebd8a36d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575002 4699 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575047 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjnhg\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-kube-api-access-cjnhg\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575059 4699 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575071 4699 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575086 4699 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" 
Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575104 4699 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575135 4699 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575152 4699 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e537c30c-dc6b-406f-bb86-5540ebd8a36d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575164 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575175 4699 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575186 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575196 4699 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575207 4699 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.575218 4699 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e537c30c-dc6b-406f-bb86-5540ebd8a36d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.966989 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" event={"ID":"e537c30c-dc6b-406f-bb86-5540ebd8a36d","Type":"ContainerDied","Data":"d402e0aae0335e59241cdcb945195a8b0e0b32ee1dcab2f8a43f80904ff391a8"} Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.967076 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d402e0aae0335e59241cdcb945195a8b0e0b32ee1dcab2f8a43f80904ff391a8" Feb 26 11:45:55 crc kubenswrapper[4699]: I0226 11:45:55.967137 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.051630 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg"] Feb 26 11:45:56 crc kubenswrapper[4699]: E0226 11:45:56.052515 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82d852b-9054-4ee4-96b8-36f007b257f3" containerName="registry-server" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.052540 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82d852b-9054-4ee4-96b8-36f007b257f3" containerName="registry-server" Feb 26 11:45:56 crc kubenswrapper[4699]: E0226 11:45:56.052562 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" containerName="extract-content" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.052571 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" containerName="extract-content" Feb 26 11:45:56 crc kubenswrapper[4699]: E0226 11:45:56.052609 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82d852b-9054-4ee4-96b8-36f007b257f3" containerName="extract-content" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.052619 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82d852b-9054-4ee4-96b8-36f007b257f3" containerName="extract-content" Feb 26 11:45:56 crc kubenswrapper[4699]: E0226 11:45:56.052629 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82d852b-9054-4ee4-96b8-36f007b257f3" containerName="extract-utilities" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.052637 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82d852b-9054-4ee4-96b8-36f007b257f3" containerName="extract-utilities" Feb 26 11:45:56 crc kubenswrapper[4699]: E0226 11:45:56.052661 4699 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" containerName="extract-utilities" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.052669 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" containerName="extract-utilities" Feb 26 11:45:56 crc kubenswrapper[4699]: E0226 11:45:56.052685 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e537c30c-dc6b-406f-bb86-5540ebd8a36d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.052694 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e537c30c-dc6b-406f-bb86-5540ebd8a36d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:56 crc kubenswrapper[4699]: E0226 11:45:56.052704 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" containerName="registry-server" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.052712 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" containerName="registry-server" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.052929 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e537c30c-dc6b-406f-bb86-5540ebd8a36d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.052947 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="d945a7c6-8e43-4dae-8521-e5e8b04f612d" containerName="registry-server" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.052977 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82d852b-9054-4ee4-96b8-36f007b257f3" containerName="registry-server" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.053857 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.056421 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.056615 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.056795 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.056865 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.056815 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.068426 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg"] Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.087629 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.087772 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.087890 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sn7j\" (UniqueName: \"kubernetes.io/projected/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-kube-api-access-7sn7j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.087955 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.088080 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.189623 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.189685 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7sn7j\" (UniqueName: \"kubernetes.io/projected/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-kube-api-access-7sn7j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.189717 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.189806 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.189940 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.193178 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.193277 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 26 
11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.193720 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.201455 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.202843 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.205509 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.205959 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.209044 4699 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-7sn7j\" (UniqueName: \"kubernetes.io/projected/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-kube-api-access-7sn7j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hmpqg\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.379017 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.389406 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.885646 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg"] Feb 26 11:45:56 crc kubenswrapper[4699]: I0226 11:45:56.979562 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" event={"ID":"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b","Type":"ContainerStarted","Data":"6fec481dc036c5b83368c4004e881ded73cc4b939fc8a01a5a352115b40fddcc"} Feb 26 11:45:57 crc kubenswrapper[4699]: I0226 11:45:57.589927 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:45:57 crc kubenswrapper[4699]: I0226 11:45:57.992014 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" event={"ID":"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b","Type":"ContainerStarted","Data":"ee40b4b1a8d8cb5e3af0d9816425a6ede7800a1df3aca66053e669a22650ea0b"} Feb 26 11:45:58 crc kubenswrapper[4699]: I0226 11:45:58.027675 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" podStartSLOduration=1.328654826 podStartE2EDuration="2.027458839s" 
podCreationTimestamp="2026-02-26 11:45:56 +0000 UTC" firstStartedPulling="2026-02-26 11:45:56.887797948 +0000 UTC m=+2102.698624372" lastFinishedPulling="2026-02-26 11:45:57.586601951 +0000 UTC m=+2103.397428385" observedRunningTime="2026-02-26 11:45:58.017681391 +0000 UTC m=+2103.828507835" watchObservedRunningTime="2026-02-26 11:45:58.027458839 +0000 UTC m=+2103.838285283" Feb 26 11:46:00 crc kubenswrapper[4699]: I0226 11:46:00.131255 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535106-cv2s5"] Feb 26 11:46:00 crc kubenswrapper[4699]: I0226 11:46:00.132881 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535106-cv2s5" Feb 26 11:46:00 crc kubenswrapper[4699]: I0226 11:46:00.139536 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:46:00 crc kubenswrapper[4699]: I0226 11:46:00.139905 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:46:00 crc kubenswrapper[4699]: I0226 11:46:00.140311 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:46:00 crc kubenswrapper[4699]: I0226 11:46:00.150468 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535106-cv2s5"] Feb 26 11:46:00 crc kubenswrapper[4699]: I0226 11:46:00.164267 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h47mg\" (UniqueName: \"kubernetes.io/projected/277ed376-d775-489c-82e7-93962bd513ff-kube-api-access-h47mg\") pod \"auto-csr-approver-29535106-cv2s5\" (UID: \"277ed376-d775-489c-82e7-93962bd513ff\") " pod="openshift-infra/auto-csr-approver-29535106-cv2s5" Feb 26 11:46:00 crc kubenswrapper[4699]: I0226 11:46:00.266661 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h47mg\" (UniqueName: \"kubernetes.io/projected/277ed376-d775-489c-82e7-93962bd513ff-kube-api-access-h47mg\") pod \"auto-csr-approver-29535106-cv2s5\" (UID: \"277ed376-d775-489c-82e7-93962bd513ff\") " pod="openshift-infra/auto-csr-approver-29535106-cv2s5" Feb 26 11:46:00 crc kubenswrapper[4699]: I0226 11:46:00.283970 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h47mg\" (UniqueName: \"kubernetes.io/projected/277ed376-d775-489c-82e7-93962bd513ff-kube-api-access-h47mg\") pod \"auto-csr-approver-29535106-cv2s5\" (UID: \"277ed376-d775-489c-82e7-93962bd513ff\") " pod="openshift-infra/auto-csr-approver-29535106-cv2s5" Feb 26 11:46:00 crc kubenswrapper[4699]: I0226 11:46:00.455242 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535106-cv2s5" Feb 26 11:46:00 crc kubenswrapper[4699]: I0226 11:46:00.875098 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535106-cv2s5"] Feb 26 11:46:01 crc kubenswrapper[4699]: I0226 11:46:01.016942 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535106-cv2s5" event={"ID":"277ed376-d775-489c-82e7-93962bd513ff","Type":"ContainerStarted","Data":"741fa4c3f9f588aa570fbe95633265e19299d4dd53f53505e7fa07dabc271811"} Feb 26 11:46:03 crc kubenswrapper[4699]: I0226 11:46:03.037748 4699 generic.go:334] "Generic (PLEG): container finished" podID="277ed376-d775-489c-82e7-93962bd513ff" containerID="6316bd489dab2ee525da2e5168f12e3d42a5b7c5139e77da702337350ea3b44a" exitCode=0 Feb 26 11:46:03 crc kubenswrapper[4699]: I0226 11:46:03.038023 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535106-cv2s5" 
event={"ID":"277ed376-d775-489c-82e7-93962bd513ff","Type":"ContainerDied","Data":"6316bd489dab2ee525da2e5168f12e3d42a5b7c5139e77da702337350ea3b44a"} Feb 26 11:46:04 crc kubenswrapper[4699]: I0226 11:46:04.352098 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535106-cv2s5" Feb 26 11:46:04 crc kubenswrapper[4699]: I0226 11:46:04.460845 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h47mg\" (UniqueName: \"kubernetes.io/projected/277ed376-d775-489c-82e7-93962bd513ff-kube-api-access-h47mg\") pod \"277ed376-d775-489c-82e7-93962bd513ff\" (UID: \"277ed376-d775-489c-82e7-93962bd513ff\") " Feb 26 11:46:04 crc kubenswrapper[4699]: I0226 11:46:04.466735 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277ed376-d775-489c-82e7-93962bd513ff-kube-api-access-h47mg" (OuterVolumeSpecName: "kube-api-access-h47mg") pod "277ed376-d775-489c-82e7-93962bd513ff" (UID: "277ed376-d775-489c-82e7-93962bd513ff"). InnerVolumeSpecName "kube-api-access-h47mg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:46:04 crc kubenswrapper[4699]: I0226 11:46:04.565658 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h47mg\" (UniqueName: \"kubernetes.io/projected/277ed376-d775-489c-82e7-93962bd513ff-kube-api-access-h47mg\") on node \"crc\" DevicePath \"\"" Feb 26 11:46:05 crc kubenswrapper[4699]: I0226 11:46:05.057663 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535106-cv2s5" event={"ID":"277ed376-d775-489c-82e7-93962bd513ff","Type":"ContainerDied","Data":"741fa4c3f9f588aa570fbe95633265e19299d4dd53f53505e7fa07dabc271811"} Feb 26 11:46:05 crc kubenswrapper[4699]: I0226 11:46:05.057707 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="741fa4c3f9f588aa570fbe95633265e19299d4dd53f53505e7fa07dabc271811" Feb 26 11:46:05 crc kubenswrapper[4699]: I0226 11:46:05.057785 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535106-cv2s5" Feb 26 11:46:05 crc kubenswrapper[4699]: I0226 11:46:05.438694 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535100-2fxw5"] Feb 26 11:46:05 crc kubenswrapper[4699]: I0226 11:46:05.445807 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535100-2fxw5"] Feb 26 11:46:06 crc kubenswrapper[4699]: I0226 11:46:06.275738 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db34348f-7e21-4666-8e45-c48a1fdbe2a4" path="/var/lib/kubelet/pods/db34348f-7e21-4666-8e45-c48a1fdbe2a4/volumes" Feb 26 11:46:11 crc kubenswrapper[4699]: I0226 11:46:11.585468 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 11:46:11 crc kubenswrapper[4699]: I0226 11:46:11.586276 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:46:11 crc kubenswrapper[4699]: I0226 11:46:11.586344 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:46:11 crc kubenswrapper[4699]: I0226 11:46:11.587455 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ef034a72d27c84dbd807adb1a50ce258b1b8022f1d940a8fb612e62f1d33345"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:46:11 crc kubenswrapper[4699]: I0226 11:46:11.587535 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://6ef034a72d27c84dbd807adb1a50ce258b1b8022f1d940a8fb612e62f1d33345" gracePeriod=600 Feb 26 11:46:12 crc kubenswrapper[4699]: I0226 11:46:12.121584 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="6ef034a72d27c84dbd807adb1a50ce258b1b8022f1d940a8fb612e62f1d33345" exitCode=0 Feb 26 11:46:12 crc kubenswrapper[4699]: I0226 11:46:12.121617 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"6ef034a72d27c84dbd807adb1a50ce258b1b8022f1d940a8fb612e62f1d33345"} Feb 26 11:46:12 crc kubenswrapper[4699]: I0226 11:46:12.122215 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde"} Feb 26 11:46:12 crc kubenswrapper[4699]: I0226 11:46:12.122240 4699 scope.go:117] "RemoveContainer" containerID="44e3df3b72d5e05b5e3932b70a80af4e9396963ab4e1e6f5434efe863b881c99" Feb 26 11:46:31 crc kubenswrapper[4699]: I0226 11:46:31.288024 4699 scope.go:117] "RemoveContainer" containerID="4505b88d80198e91d210a89e948ba5fb9b137a6a7006ae878e49e6ab4a45d98a" Feb 26 11:46:59 crc kubenswrapper[4699]: I0226 11:46:59.544227 4699 generic.go:334] "Generic (PLEG): container finished" podID="dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b" containerID="ee40b4b1a8d8cb5e3af0d9816425a6ede7800a1df3aca66053e669a22650ea0b" exitCode=0 Feb 26 11:46:59 crc kubenswrapper[4699]: I0226 11:46:59.544338 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" event={"ID":"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b","Type":"ContainerDied","Data":"ee40b4b1a8d8cb5e3af0d9816425a6ede7800a1df3aca66053e669a22650ea0b"} Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.031014 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.147851 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovn-combined-ca-bundle\") pod \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.148206 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ssh-key-openstack-edpm-ipam\") pod \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.148239 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovncontroller-config-0\") pod \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.148275 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sn7j\" (UniqueName: \"kubernetes.io/projected/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-kube-api-access-7sn7j\") pod \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.148317 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-inventory\") pod \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\" (UID: \"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b\") " Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.155812 4699 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-kube-api-access-7sn7j" (OuterVolumeSpecName: "kube-api-access-7sn7j") pod "dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b" (UID: "dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b"). InnerVolumeSpecName "kube-api-access-7sn7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.157832 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b" (UID: "dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.176638 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b" (UID: "dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.185167 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b" (UID: "dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.187385 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-inventory" (OuterVolumeSpecName: "inventory") pod "dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b" (UID: "dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.251608 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.251648 4699 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.251659 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sn7j\" (UniqueName: \"kubernetes.io/projected/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-kube-api-access-7sn7j\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.251668 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.251678 4699 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.567671 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" event={"ID":"dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b","Type":"ContainerDied","Data":"6fec481dc036c5b83368c4004e881ded73cc4b939fc8a01a5a352115b40fddcc"} Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.567721 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fec481dc036c5b83368c4004e881ded73cc4b939fc8a01a5a352115b40fddcc" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.567742 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hmpqg" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.687583 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l"] Feb 26 11:47:01 crc kubenswrapper[4699]: E0226 11:47:01.688072 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277ed376-d775-489c-82e7-93962bd513ff" containerName="oc" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.688096 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="277ed376-d775-489c-82e7-93962bd513ff" containerName="oc" Feb 26 11:47:01 crc kubenswrapper[4699]: E0226 11:47:01.688160 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.688172 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.688431 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.688455 4699 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="277ed376-d775-489c-82e7-93962bd513ff" containerName="oc" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.689235 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.691456 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.691498 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.691822 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.691846 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.692174 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.693100 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.702814 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l"] Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.862501 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.862552 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.862585 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.862609 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.862675 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjv4v\" (UniqueName: \"kubernetes.io/projected/59456382-a459-4f82-ac99-b96eb735ddb9-kube-api-access-tjv4v\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.863054 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.964735 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.964896 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.964959 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.964983 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.965077 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.965139 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjv4v\" (UniqueName: \"kubernetes.io/projected/59456382-a459-4f82-ac99-b96eb735ddb9-kube-api-access-tjv4v\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.969267 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 
crc kubenswrapper[4699]: I0226 11:47:01.969294 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.969365 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.969770 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.970913 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:01 crc kubenswrapper[4699]: I0226 11:47:01.984004 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjv4v\" 
(UniqueName: \"kubernetes.io/projected/59456382-a459-4f82-ac99-b96eb735ddb9-kube-api-access-tjv4v\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:02 crc kubenswrapper[4699]: I0226 11:47:02.006589 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:02 crc kubenswrapper[4699]: I0226 11:47:02.542711 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l"] Feb 26 11:47:02 crc kubenswrapper[4699]: I0226 11:47:02.549883 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:47:02 crc kubenswrapper[4699]: I0226 11:47:02.583889 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" event={"ID":"59456382-a459-4f82-ac99-b96eb735ddb9","Type":"ContainerStarted","Data":"b65ffd3662b40c51c98c1dd30170152c5d509e1d0fe771319b3ef00e26682063"} Feb 26 11:47:03 crc kubenswrapper[4699]: I0226 11:47:03.594824 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" event={"ID":"59456382-a459-4f82-ac99-b96eb735ddb9","Type":"ContainerStarted","Data":"08a5874a3b7c9d905481c4a6b7b1f36886135a1f3140e5983bc7888075a8dbaa"} Feb 26 11:47:03 crc kubenswrapper[4699]: I0226 11:47:03.613686 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" podStartSLOduration=2.000379345 podStartE2EDuration="2.613643779s" podCreationTimestamp="2026-02-26 11:47:01 +0000 UTC" firstStartedPulling="2026-02-26 11:47:02.549638715 +0000 UTC m=+2168.360465149" lastFinishedPulling="2026-02-26 
11:47:03.162903149 +0000 UTC m=+2168.973729583" observedRunningTime="2026-02-26 11:47:03.610189011 +0000 UTC m=+2169.421015455" watchObservedRunningTime="2026-02-26 11:47:03.613643779 +0000 UTC m=+2169.424470223" Feb 26 11:47:10 crc kubenswrapper[4699]: I0226 11:47:10.747377 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6t992"] Feb 26 11:47:10 crc kubenswrapper[4699]: I0226 11:47:10.750454 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:10 crc kubenswrapper[4699]: I0226 11:47:10.757799 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6t992"] Feb 26 11:47:10 crc kubenswrapper[4699]: I0226 11:47:10.882670 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-utilities\") pod \"redhat-operators-6t992\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:10 crc kubenswrapper[4699]: I0226 11:47:10.883440 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-catalog-content\") pod \"redhat-operators-6t992\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:10 crc kubenswrapper[4699]: I0226 11:47:10.883575 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfpcd\" (UniqueName: \"kubernetes.io/projected/17908e12-55e0-4e17-9ffb-a33a2208c13c-kube-api-access-pfpcd\") pod \"redhat-operators-6t992\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:10 crc 
kubenswrapper[4699]: I0226 11:47:10.985396 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-utilities\") pod \"redhat-operators-6t992\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:10 crc kubenswrapper[4699]: I0226 11:47:10.985465 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-catalog-content\") pod \"redhat-operators-6t992\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:10 crc kubenswrapper[4699]: I0226 11:47:10.985515 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfpcd\" (UniqueName: \"kubernetes.io/projected/17908e12-55e0-4e17-9ffb-a33a2208c13c-kube-api-access-pfpcd\") pod \"redhat-operators-6t992\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:10 crc kubenswrapper[4699]: I0226 11:47:10.985973 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-utilities\") pod \"redhat-operators-6t992\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:10 crc kubenswrapper[4699]: I0226 11:47:10.986052 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-catalog-content\") pod \"redhat-operators-6t992\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:11 crc kubenswrapper[4699]: I0226 11:47:11.011572 4699 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfpcd\" (UniqueName: \"kubernetes.io/projected/17908e12-55e0-4e17-9ffb-a33a2208c13c-kube-api-access-pfpcd\") pod \"redhat-operators-6t992\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:11 crc kubenswrapper[4699]: I0226 11:47:11.071232 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:11 crc kubenswrapper[4699]: I0226 11:47:11.632472 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6t992"] Feb 26 11:47:11 crc kubenswrapper[4699]: I0226 11:47:11.711948 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t992" event={"ID":"17908e12-55e0-4e17-9ffb-a33a2208c13c","Type":"ContainerStarted","Data":"27550c0b57d27ece73dfcb2f3c7ecab7a2bc0f6ea93790910de25028b7547595"} Feb 26 11:47:12 crc kubenswrapper[4699]: I0226 11:47:12.723371 4699 generic.go:334] "Generic (PLEG): container finished" podID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerID="2ce84e69f7d9b5f03ad4ad405f0853dcfef618c84355f2c04940b92b37190ddf" exitCode=0 Feb 26 11:47:12 crc kubenswrapper[4699]: I0226 11:47:12.723711 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t992" event={"ID":"17908e12-55e0-4e17-9ffb-a33a2208c13c","Type":"ContainerDied","Data":"2ce84e69f7d9b5f03ad4ad405f0853dcfef618c84355f2c04940b92b37190ddf"} Feb 26 11:47:13 crc kubenswrapper[4699]: I0226 11:47:13.734071 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t992" event={"ID":"17908e12-55e0-4e17-9ffb-a33a2208c13c","Type":"ContainerStarted","Data":"8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a"} Feb 26 11:47:19 crc kubenswrapper[4699]: I0226 11:47:19.782725 4699 generic.go:334] "Generic 
(PLEG): container finished" podID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerID="8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a" exitCode=0 Feb 26 11:47:19 crc kubenswrapper[4699]: I0226 11:47:19.782836 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t992" event={"ID":"17908e12-55e0-4e17-9ffb-a33a2208c13c","Type":"ContainerDied","Data":"8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a"} Feb 26 11:47:20 crc kubenswrapper[4699]: I0226 11:47:20.794706 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t992" event={"ID":"17908e12-55e0-4e17-9ffb-a33a2208c13c","Type":"ContainerStarted","Data":"e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a"} Feb 26 11:47:20 crc kubenswrapper[4699]: I0226 11:47:20.816542 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6t992" podStartSLOduration=3.227941036 podStartE2EDuration="10.816507841s" podCreationTimestamp="2026-02-26 11:47:10 +0000 UTC" firstStartedPulling="2026-02-26 11:47:12.725995906 +0000 UTC m=+2178.536822340" lastFinishedPulling="2026-02-26 11:47:20.314562711 +0000 UTC m=+2186.125389145" observedRunningTime="2026-02-26 11:47:20.811599711 +0000 UTC m=+2186.622426135" watchObservedRunningTime="2026-02-26 11:47:20.816507841 +0000 UTC m=+2186.627334275" Feb 26 11:47:21 crc kubenswrapper[4699]: I0226 11:47:21.072417 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:21 crc kubenswrapper[4699]: I0226 11:47:21.072473 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:22 crc kubenswrapper[4699]: I0226 11:47:22.123094 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6t992" 
podUID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerName="registry-server" probeResult="failure" output=< Feb 26 11:47:22 crc kubenswrapper[4699]: timeout: failed to connect service ":50051" within 1s Feb 26 11:47:22 crc kubenswrapper[4699]: > Feb 26 11:47:31 crc kubenswrapper[4699]: I0226 11:47:31.123770 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:31 crc kubenswrapper[4699]: I0226 11:47:31.170146 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:31 crc kubenswrapper[4699]: I0226 11:47:31.357653 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6t992"] Feb 26 11:47:32 crc kubenswrapper[4699]: I0226 11:47:32.893082 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6t992" podUID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerName="registry-server" containerID="cri-o://e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a" gracePeriod=2 Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.356812 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.520088 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-utilities\") pod \"17908e12-55e0-4e17-9ffb-a33a2208c13c\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.520283 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfpcd\" (UniqueName: \"kubernetes.io/projected/17908e12-55e0-4e17-9ffb-a33a2208c13c-kube-api-access-pfpcd\") pod \"17908e12-55e0-4e17-9ffb-a33a2208c13c\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.520312 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-catalog-content\") pod \"17908e12-55e0-4e17-9ffb-a33a2208c13c\" (UID: \"17908e12-55e0-4e17-9ffb-a33a2208c13c\") " Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.521812 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-utilities" (OuterVolumeSpecName: "utilities") pod "17908e12-55e0-4e17-9ffb-a33a2208c13c" (UID: "17908e12-55e0-4e17-9ffb-a33a2208c13c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.527666 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17908e12-55e0-4e17-9ffb-a33a2208c13c-kube-api-access-pfpcd" (OuterVolumeSpecName: "kube-api-access-pfpcd") pod "17908e12-55e0-4e17-9ffb-a33a2208c13c" (UID: "17908e12-55e0-4e17-9ffb-a33a2208c13c"). InnerVolumeSpecName "kube-api-access-pfpcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.622627 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.622659 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfpcd\" (UniqueName: \"kubernetes.io/projected/17908e12-55e0-4e17-9ffb-a33a2208c13c-kube-api-access-pfpcd\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.639382 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17908e12-55e0-4e17-9ffb-a33a2208c13c" (UID: "17908e12-55e0-4e17-9ffb-a33a2208c13c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.724745 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17908e12-55e0-4e17-9ffb-a33a2208c13c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.906360 4699 generic.go:334] "Generic (PLEG): container finished" podID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerID="e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a" exitCode=0 Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.906401 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t992" event={"ID":"17908e12-55e0-4e17-9ffb-a33a2208c13c","Type":"ContainerDied","Data":"e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a"} Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.906432 4699 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-6t992" event={"ID":"17908e12-55e0-4e17-9ffb-a33a2208c13c","Type":"ContainerDied","Data":"27550c0b57d27ece73dfcb2f3c7ecab7a2bc0f6ea93790910de25028b7547595"} Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.906450 4699 scope.go:117] "RemoveContainer" containerID="e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a" Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.906496 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t992" Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.928445 4699 scope.go:117] "RemoveContainer" containerID="8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a" Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.953859 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6t992"] Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.968518 4699 scope.go:117] "RemoveContainer" containerID="2ce84e69f7d9b5f03ad4ad405f0853dcfef618c84355f2c04940b92b37190ddf" Feb 26 11:47:33 crc kubenswrapper[4699]: I0226 11:47:33.993324 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6t992"] Feb 26 11:47:34 crc kubenswrapper[4699]: I0226 11:47:34.017033 4699 scope.go:117] "RemoveContainer" containerID="e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a" Feb 26 11:47:34 crc kubenswrapper[4699]: E0226 11:47:34.017781 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a\": container with ID starting with e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a not found: ID does not exist" containerID="e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a" Feb 26 11:47:34 crc kubenswrapper[4699]: I0226 11:47:34.017867 4699 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a"} err="failed to get container status \"e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a\": rpc error: code = NotFound desc = could not find container \"e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a\": container with ID starting with e04e82cff708119a13028fa8ab510b921da08edef6a16507873b418d4c86cb1a not found: ID does not exist" Feb 26 11:47:34 crc kubenswrapper[4699]: I0226 11:47:34.017899 4699 scope.go:117] "RemoveContainer" containerID="8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a" Feb 26 11:47:34 crc kubenswrapper[4699]: E0226 11:47:34.018584 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a\": container with ID starting with 8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a not found: ID does not exist" containerID="8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a" Feb 26 11:47:34 crc kubenswrapper[4699]: I0226 11:47:34.018619 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a"} err="failed to get container status \"8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a\": rpc error: code = NotFound desc = could not find container \"8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a\": container with ID starting with 8d0b1ac4594d259abec4b98cdcc147f8d8a0f1d0e1fa7c1dc5a734331fac319a not found: ID does not exist" Feb 26 11:47:34 crc kubenswrapper[4699]: I0226 11:47:34.018644 4699 scope.go:117] "RemoveContainer" containerID="2ce84e69f7d9b5f03ad4ad405f0853dcfef618c84355f2c04940b92b37190ddf" Feb 26 11:47:34 crc kubenswrapper[4699]: E0226 
11:47:34.018969 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce84e69f7d9b5f03ad4ad405f0853dcfef618c84355f2c04940b92b37190ddf\": container with ID starting with 2ce84e69f7d9b5f03ad4ad405f0853dcfef618c84355f2c04940b92b37190ddf not found: ID does not exist" containerID="2ce84e69f7d9b5f03ad4ad405f0853dcfef618c84355f2c04940b92b37190ddf" Feb 26 11:47:34 crc kubenswrapper[4699]: I0226 11:47:34.018995 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce84e69f7d9b5f03ad4ad405f0853dcfef618c84355f2c04940b92b37190ddf"} err="failed to get container status \"2ce84e69f7d9b5f03ad4ad405f0853dcfef618c84355f2c04940b92b37190ddf\": rpc error: code = NotFound desc = could not find container \"2ce84e69f7d9b5f03ad4ad405f0853dcfef618c84355f2c04940b92b37190ddf\": container with ID starting with 2ce84e69f7d9b5f03ad4ad405f0853dcfef618c84355f2c04940b92b37190ddf not found: ID does not exist" Feb 26 11:47:34 crc kubenswrapper[4699]: I0226 11:47:34.274211 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17908e12-55e0-4e17-9ffb-a33a2208c13c" path="/var/lib/kubelet/pods/17908e12-55e0-4e17-9ffb-a33a2208c13c/volumes" Feb 26 11:47:52 crc kubenswrapper[4699]: I0226 11:47:52.135631 4699 generic.go:334] "Generic (PLEG): container finished" podID="59456382-a459-4f82-ac99-b96eb735ddb9" containerID="08a5874a3b7c9d905481c4a6b7b1f36886135a1f3140e5983bc7888075a8dbaa" exitCode=0 Feb 26 11:47:52 crc kubenswrapper[4699]: I0226 11:47:52.135715 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" event={"ID":"59456382-a459-4f82-ac99-b96eb735ddb9","Type":"ContainerDied","Data":"08a5874a3b7c9d905481c4a6b7b1f36886135a1f3140e5983bc7888075a8dbaa"} Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.622704 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.810637 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjv4v\" (UniqueName: \"kubernetes.io/projected/59456382-a459-4f82-ac99-b96eb735ddb9-kube-api-access-tjv4v\") pod \"59456382-a459-4f82-ac99-b96eb735ddb9\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.810723 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-nova-metadata-neutron-config-0\") pod \"59456382-a459-4f82-ac99-b96eb735ddb9\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.810752 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-inventory\") pod \"59456382-a459-4f82-ac99-b96eb735ddb9\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.810786 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-metadata-combined-ca-bundle\") pod \"59456382-a459-4f82-ac99-b96eb735ddb9\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.810992 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"59456382-a459-4f82-ac99-b96eb735ddb9\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " Feb 
26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.811072 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-ssh-key-openstack-edpm-ipam\") pod \"59456382-a459-4f82-ac99-b96eb735ddb9\" (UID: \"59456382-a459-4f82-ac99-b96eb735ddb9\") " Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.818649 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59456382-a459-4f82-ac99-b96eb735ddb9-kube-api-access-tjv4v" (OuterVolumeSpecName: "kube-api-access-tjv4v") pod "59456382-a459-4f82-ac99-b96eb735ddb9" (UID: "59456382-a459-4f82-ac99-b96eb735ddb9"). InnerVolumeSpecName "kube-api-access-tjv4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.821104 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "59456382-a459-4f82-ac99-b96eb735ddb9" (UID: "59456382-a459-4f82-ac99-b96eb735ddb9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.846125 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "59456382-a459-4f82-ac99-b96eb735ddb9" (UID: "59456382-a459-4f82-ac99-b96eb735ddb9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.855065 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "59456382-a459-4f82-ac99-b96eb735ddb9" (UID: "59456382-a459-4f82-ac99-b96eb735ddb9"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.856913 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-inventory" (OuterVolumeSpecName: "inventory") pod "59456382-a459-4f82-ac99-b96eb735ddb9" (UID: "59456382-a459-4f82-ac99-b96eb735ddb9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.862415 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "59456382-a459-4f82-ac99-b96eb735ddb9" (UID: "59456382-a459-4f82-ac99-b96eb735ddb9"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.981283 4699 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.981331 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.981345 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjv4v\" (UniqueName: \"kubernetes.io/projected/59456382-a459-4f82-ac99-b96eb735ddb9-kube-api-access-tjv4v\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.981359 4699 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.981369 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:53 crc kubenswrapper[4699]: I0226 11:47:53.981379 4699 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59456382-a459-4f82-ac99-b96eb735ddb9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.157989 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" event={"ID":"59456382-a459-4f82-ac99-b96eb735ddb9","Type":"ContainerDied","Data":"b65ffd3662b40c51c98c1dd30170152c5d509e1d0fe771319b3ef00e26682063"} Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.158033 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b65ffd3662b40c51c98c1dd30170152c5d509e1d0fe771319b3ef00e26682063" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.158052 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.250147 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f"] Feb 26 11:47:54 crc kubenswrapper[4699]: E0226 11:47:54.252676 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerName="extract-content" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.252705 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerName="extract-content" Feb 26 11:47:54 crc kubenswrapper[4699]: E0226 11:47:54.252723 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerName="registry-server" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.252732 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerName="registry-server" Feb 26 11:47:54 crc kubenswrapper[4699]: E0226 11:47:54.252752 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59456382-a459-4f82-ac99-b96eb735ddb9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.252760 4699 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="59456382-a459-4f82-ac99-b96eb735ddb9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:54 crc kubenswrapper[4699]: E0226 11:47:54.252774 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerName="extract-utilities" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.252781 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerName="extract-utilities" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.253012 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="59456382-a459-4f82-ac99-b96eb735ddb9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.253033 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="17908e12-55e0-4e17-9ffb-a33a2208c13c" containerName="registry-server" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.253711 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.257267 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.257276 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.257387 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.257424 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.257648 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.277654 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f"] Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.389298 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.390251 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.390432 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.390583 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.390624 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57ppn\" (UniqueName: \"kubernetes.io/projected/6436c321-6850-4db3-81b2-0dc329e10900-kube-api-access-57ppn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.492313 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.492397 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.492424 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57ppn\" (UniqueName: \"kubernetes.io/projected/6436c321-6850-4db3-81b2-0dc329e10900-kube-api-access-57ppn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.492483 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.492547 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.496578 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: 
\"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.496585 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.496596 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.503783 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.513714 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57ppn\" (UniqueName: \"kubernetes.io/projected/6436c321-6850-4db3-81b2-0dc329e10900-kube-api-access-57ppn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:54 crc kubenswrapper[4699]: I0226 11:47:54.585576 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:47:55 crc kubenswrapper[4699]: I0226 11:47:55.120723 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f"] Feb 26 11:47:55 crc kubenswrapper[4699]: I0226 11:47:55.167702 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" event={"ID":"6436c321-6850-4db3-81b2-0dc329e10900","Type":"ContainerStarted","Data":"da7eabc20b73f3cfcb5f479d6c26b5a779dbd02d4697b1b42ef3653df7b2ae5b"} Feb 26 11:47:56 crc kubenswrapper[4699]: I0226 11:47:56.178715 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" event={"ID":"6436c321-6850-4db3-81b2-0dc329e10900","Type":"ContainerStarted","Data":"8bd3df01daa0942902ceb3f721b2d365aa21e62ede502d0a6f006ad1267cec53"} Feb 26 11:47:56 crc kubenswrapper[4699]: I0226 11:47:56.200376 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" podStartSLOduration=1.3859444220000001 podStartE2EDuration="2.200354882s" podCreationTimestamp="2026-02-26 11:47:54 +0000 UTC" firstStartedPulling="2026-02-26 11:47:55.122782917 +0000 UTC m=+2220.933609351" lastFinishedPulling="2026-02-26 11:47:55.937193377 +0000 UTC m=+2221.748019811" observedRunningTime="2026-02-26 11:47:56.196540364 +0000 UTC m=+2222.007366798" watchObservedRunningTime="2026-02-26 11:47:56.200354882 +0000 UTC m=+2222.011181316" Feb 26 11:48:00 crc kubenswrapper[4699]: I0226 11:48:00.137052 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535108-79cdj"] Feb 26 11:48:00 crc kubenswrapper[4699]: I0226 11:48:00.139582 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535108-79cdj" Feb 26 11:48:00 crc kubenswrapper[4699]: I0226 11:48:00.142834 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:48:00 crc kubenswrapper[4699]: I0226 11:48:00.143334 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:48:00 crc kubenswrapper[4699]: I0226 11:48:00.143636 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:48:00 crc kubenswrapper[4699]: I0226 11:48:00.155919 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535108-79cdj"] Feb 26 11:48:00 crc kubenswrapper[4699]: I0226 11:48:00.255012 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgwkv\" (UniqueName: \"kubernetes.io/projected/6366100d-f68c-43ce-879b-4cc3f80c8156-kube-api-access-wgwkv\") pod \"auto-csr-approver-29535108-79cdj\" (UID: \"6366100d-f68c-43ce-879b-4cc3f80c8156\") " pod="openshift-infra/auto-csr-approver-29535108-79cdj" Feb 26 11:48:00 crc kubenswrapper[4699]: I0226 11:48:00.357207 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgwkv\" (UniqueName: \"kubernetes.io/projected/6366100d-f68c-43ce-879b-4cc3f80c8156-kube-api-access-wgwkv\") pod \"auto-csr-approver-29535108-79cdj\" (UID: \"6366100d-f68c-43ce-879b-4cc3f80c8156\") " pod="openshift-infra/auto-csr-approver-29535108-79cdj" Feb 26 11:48:00 crc kubenswrapper[4699]: I0226 11:48:00.381739 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgwkv\" (UniqueName: \"kubernetes.io/projected/6366100d-f68c-43ce-879b-4cc3f80c8156-kube-api-access-wgwkv\") pod \"auto-csr-approver-29535108-79cdj\" (UID: \"6366100d-f68c-43ce-879b-4cc3f80c8156\") " 
pod="openshift-infra/auto-csr-approver-29535108-79cdj" Feb 26 11:48:00 crc kubenswrapper[4699]: I0226 11:48:00.467242 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535108-79cdj" Feb 26 11:48:00 crc kubenswrapper[4699]: I0226 11:48:00.937721 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535108-79cdj"] Feb 26 11:48:01 crc kubenswrapper[4699]: I0226 11:48:01.261463 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535108-79cdj" event={"ID":"6366100d-f68c-43ce-879b-4cc3f80c8156","Type":"ContainerStarted","Data":"9550e0566c3abf9a4ff53e5eebe42cb9ed71dc39b77b450749cfbf15b78168d7"} Feb 26 11:48:04 crc kubenswrapper[4699]: I0226 11:48:04.299834 4699 generic.go:334] "Generic (PLEG): container finished" podID="6366100d-f68c-43ce-879b-4cc3f80c8156" containerID="a49e0f6a5b8aa98c17ff2dc316f41da6f3d780c3f18aaef30599837dcc6bc0ea" exitCode=0 Feb 26 11:48:04 crc kubenswrapper[4699]: I0226 11:48:04.299946 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535108-79cdj" event={"ID":"6366100d-f68c-43ce-879b-4cc3f80c8156","Type":"ContainerDied","Data":"a49e0f6a5b8aa98c17ff2dc316f41da6f3d780c3f18aaef30599837dcc6bc0ea"} Feb 26 11:48:05 crc kubenswrapper[4699]: I0226 11:48:05.680043 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535108-79cdj" Feb 26 11:48:05 crc kubenswrapper[4699]: I0226 11:48:05.808875 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgwkv\" (UniqueName: \"kubernetes.io/projected/6366100d-f68c-43ce-879b-4cc3f80c8156-kube-api-access-wgwkv\") pod \"6366100d-f68c-43ce-879b-4cc3f80c8156\" (UID: \"6366100d-f68c-43ce-879b-4cc3f80c8156\") " Feb 26 11:48:05 crc kubenswrapper[4699]: I0226 11:48:05.815504 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6366100d-f68c-43ce-879b-4cc3f80c8156-kube-api-access-wgwkv" (OuterVolumeSpecName: "kube-api-access-wgwkv") pod "6366100d-f68c-43ce-879b-4cc3f80c8156" (UID: "6366100d-f68c-43ce-879b-4cc3f80c8156"). InnerVolumeSpecName "kube-api-access-wgwkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:48:05 crc kubenswrapper[4699]: I0226 11:48:05.910736 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgwkv\" (UniqueName: \"kubernetes.io/projected/6366100d-f68c-43ce-879b-4cc3f80c8156-kube-api-access-wgwkv\") on node \"crc\" DevicePath \"\"" Feb 26 11:48:06 crc kubenswrapper[4699]: I0226 11:48:06.319061 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535108-79cdj" event={"ID":"6366100d-f68c-43ce-879b-4cc3f80c8156","Type":"ContainerDied","Data":"9550e0566c3abf9a4ff53e5eebe42cb9ed71dc39b77b450749cfbf15b78168d7"} Feb 26 11:48:06 crc kubenswrapper[4699]: I0226 11:48:06.319452 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9550e0566c3abf9a4ff53e5eebe42cb9ed71dc39b77b450749cfbf15b78168d7" Feb 26 11:48:06 crc kubenswrapper[4699]: I0226 11:48:06.319143 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535108-79cdj" Feb 26 11:48:06 crc kubenswrapper[4699]: I0226 11:48:06.753922 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535102-2zbvr"] Feb 26 11:48:06 crc kubenswrapper[4699]: I0226 11:48:06.762852 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535102-2zbvr"] Feb 26 11:48:08 crc kubenswrapper[4699]: I0226 11:48:08.272475 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984" path="/var/lib/kubelet/pods/1b7b6e2c-94b0-4cfd-9b33-f988c4a8c984/volumes" Feb 26 11:48:11 crc kubenswrapper[4699]: I0226 11:48:11.584929 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:48:11 crc kubenswrapper[4699]: I0226 11:48:11.585657 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:48:31 crc kubenswrapper[4699]: I0226 11:48:31.385803 4699 scope.go:117] "RemoveContainer" containerID="dfc62ad99cdddeccaa0a04e48b0be130dad6cc30569fc90d45e5fa7beabda285" Feb 26 11:48:41 crc kubenswrapper[4699]: I0226 11:48:41.594007 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:48:41 crc kubenswrapper[4699]: 
I0226 11:48:41.595520 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:49:11 crc kubenswrapper[4699]: I0226 11:49:11.585527 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:49:11 crc kubenswrapper[4699]: I0226 11:49:11.586235 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:49:11 crc kubenswrapper[4699]: I0226 11:49:11.586304 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:49:11 crc kubenswrapper[4699]: I0226 11:49:11.587459 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:49:11 crc kubenswrapper[4699]: I0226 11:49:11.587545 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" 
containerName="machine-config-daemon" containerID="cri-o://c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" gracePeriod=600 Feb 26 11:49:11 crc kubenswrapper[4699]: E0226 11:49:11.716613 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:49:11 crc kubenswrapper[4699]: I0226 11:49:11.969927 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" exitCode=0 Feb 26 11:49:11 crc kubenswrapper[4699]: I0226 11:49:11.969982 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde"} Feb 26 11:49:11 crc kubenswrapper[4699]: I0226 11:49:11.970384 4699 scope.go:117] "RemoveContainer" containerID="6ef034a72d27c84dbd807adb1a50ce258b1b8022f1d940a8fb612e62f1d33345" Feb 26 11:49:11 crc kubenswrapper[4699]: I0226 11:49:11.972001 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:49:11 crc kubenswrapper[4699]: E0226 11:49:11.973309 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:49:25 crc kubenswrapper[4699]: I0226 11:49:25.261859 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:49:25 crc kubenswrapper[4699]: E0226 11:49:25.262870 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:49:38 crc kubenswrapper[4699]: I0226 11:49:38.261038 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:49:38 crc kubenswrapper[4699]: E0226 11:49:38.261945 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:49:51 crc kubenswrapper[4699]: I0226 11:49:51.261247 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:49:51 crc kubenswrapper[4699]: E0226 11:49:51.262082 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.151875 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535110-g2n8d"] Feb 26 11:50:00 crc kubenswrapper[4699]: E0226 11:50:00.153230 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6366100d-f68c-43ce-879b-4cc3f80c8156" containerName="oc" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.153247 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6366100d-f68c-43ce-879b-4cc3f80c8156" containerName="oc" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.153530 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6366100d-f68c-43ce-879b-4cc3f80c8156" containerName="oc" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.154384 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535110-g2n8d" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.157382 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.157475 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.157524 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.175281 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535110-g2n8d"] Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.258224 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtf98\" (UniqueName: \"kubernetes.io/projected/e270a2c1-b1c4-498d-9adf-a3cbb51defce-kube-api-access-gtf98\") pod \"auto-csr-approver-29535110-g2n8d\" (UID: \"e270a2c1-b1c4-498d-9adf-a3cbb51defce\") " pod="openshift-infra/auto-csr-approver-29535110-g2n8d" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.360259 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtf98\" (UniqueName: \"kubernetes.io/projected/e270a2c1-b1c4-498d-9adf-a3cbb51defce-kube-api-access-gtf98\") pod \"auto-csr-approver-29535110-g2n8d\" (UID: \"e270a2c1-b1c4-498d-9adf-a3cbb51defce\") " pod="openshift-infra/auto-csr-approver-29535110-g2n8d" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.381853 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtf98\" (UniqueName: \"kubernetes.io/projected/e270a2c1-b1c4-498d-9adf-a3cbb51defce-kube-api-access-gtf98\") pod \"auto-csr-approver-29535110-g2n8d\" (UID: \"e270a2c1-b1c4-498d-9adf-a3cbb51defce\") " 
pod="openshift-infra/auto-csr-approver-29535110-g2n8d" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.479556 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535110-g2n8d" Feb 26 11:50:00 crc kubenswrapper[4699]: I0226 11:50:00.963154 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535110-g2n8d"] Feb 26 11:50:01 crc kubenswrapper[4699]: I0226 11:50:01.419944 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535110-g2n8d" event={"ID":"e270a2c1-b1c4-498d-9adf-a3cbb51defce","Type":"ContainerStarted","Data":"c9a043ff431ac00b2450dceb37f7050fd1f84e1b7f33e8aa986d6da1ce100586"} Feb 26 11:50:03 crc kubenswrapper[4699]: I0226 11:50:03.261589 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:50:03 crc kubenswrapper[4699]: E0226 11:50:03.262547 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:50:04 crc kubenswrapper[4699]: I0226 11:50:04.447535 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535110-g2n8d" event={"ID":"e270a2c1-b1c4-498d-9adf-a3cbb51defce","Type":"ContainerStarted","Data":"5c3d5a2f0c08caa11b3efe5f7dadcab2f42f5d3eecfcc331eaac28aadfec2f57"} Feb 26 11:50:04 crc kubenswrapper[4699]: I0226 11:50:04.467130 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535110-g2n8d" podStartSLOduration=1.3769225600000001 
podStartE2EDuration="4.46709568s" podCreationTimestamp="2026-02-26 11:50:00 +0000 UTC" firstStartedPulling="2026-02-26 11:50:00.966701521 +0000 UTC m=+2346.777527955" lastFinishedPulling="2026-02-26 11:50:04.056874641 +0000 UTC m=+2349.867701075" observedRunningTime="2026-02-26 11:50:04.459552103 +0000 UTC m=+2350.270378527" watchObservedRunningTime="2026-02-26 11:50:04.46709568 +0000 UTC m=+2350.277922124" Feb 26 11:50:05 crc kubenswrapper[4699]: I0226 11:50:05.458276 4699 generic.go:334] "Generic (PLEG): container finished" podID="e270a2c1-b1c4-498d-9adf-a3cbb51defce" containerID="5c3d5a2f0c08caa11b3efe5f7dadcab2f42f5d3eecfcc331eaac28aadfec2f57" exitCode=0 Feb 26 11:50:05 crc kubenswrapper[4699]: I0226 11:50:05.458328 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535110-g2n8d" event={"ID":"e270a2c1-b1c4-498d-9adf-a3cbb51defce","Type":"ContainerDied","Data":"5c3d5a2f0c08caa11b3efe5f7dadcab2f42f5d3eecfcc331eaac28aadfec2f57"} Feb 26 11:50:06 crc kubenswrapper[4699]: I0226 11:50:06.783481 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535110-g2n8d" Feb 26 11:50:06 crc kubenswrapper[4699]: I0226 11:50:06.797896 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtf98\" (UniqueName: \"kubernetes.io/projected/e270a2c1-b1c4-498d-9adf-a3cbb51defce-kube-api-access-gtf98\") pod \"e270a2c1-b1c4-498d-9adf-a3cbb51defce\" (UID: \"e270a2c1-b1c4-498d-9adf-a3cbb51defce\") " Feb 26 11:50:06 crc kubenswrapper[4699]: I0226 11:50:06.805211 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e270a2c1-b1c4-498d-9adf-a3cbb51defce-kube-api-access-gtf98" (OuterVolumeSpecName: "kube-api-access-gtf98") pod "e270a2c1-b1c4-498d-9adf-a3cbb51defce" (UID: "e270a2c1-b1c4-498d-9adf-a3cbb51defce"). InnerVolumeSpecName "kube-api-access-gtf98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:50:06 crc kubenswrapper[4699]: I0226 11:50:06.899633 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtf98\" (UniqueName: \"kubernetes.io/projected/e270a2c1-b1c4-498d-9adf-a3cbb51defce-kube-api-access-gtf98\") on node \"crc\" DevicePath \"\"" Feb 26 11:50:07 crc kubenswrapper[4699]: I0226 11:50:07.476848 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535110-g2n8d" event={"ID":"e270a2c1-b1c4-498d-9adf-a3cbb51defce","Type":"ContainerDied","Data":"c9a043ff431ac00b2450dceb37f7050fd1f84e1b7f33e8aa986d6da1ce100586"} Feb 26 11:50:07 crc kubenswrapper[4699]: I0226 11:50:07.477200 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9a043ff431ac00b2450dceb37f7050fd1f84e1b7f33e8aa986d6da1ce100586" Feb 26 11:50:07 crc kubenswrapper[4699]: I0226 11:50:07.476924 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535110-g2n8d" Feb 26 11:50:07 crc kubenswrapper[4699]: I0226 11:50:07.534022 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535104-r58dw"] Feb 26 11:50:07 crc kubenswrapper[4699]: I0226 11:50:07.544914 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535104-r58dw"] Feb 26 11:50:08 crc kubenswrapper[4699]: I0226 11:50:08.271658 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a59d7ac-e643-4693-9c6b-994f1fadd83d" path="/var/lib/kubelet/pods/3a59d7ac-e643-4693-9c6b-994f1fadd83d/volumes" Feb 26 11:50:18 crc kubenswrapper[4699]: I0226 11:50:18.262091 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:50:18 crc kubenswrapper[4699]: E0226 11:50:18.265626 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:50:29 crc kubenswrapper[4699]: I0226 11:50:29.261572 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:50:29 crc kubenswrapper[4699]: E0226 11:50:29.264431 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:50:31 crc kubenswrapper[4699]: I0226 11:50:31.508105 4699 scope.go:117] "RemoveContainer" containerID="6dd92189791b2617628aa3e717314eb02f69fda3f8d5e7e8ceb2bcddb537435f" Feb 26 11:50:40 crc kubenswrapper[4699]: I0226 11:50:40.261207 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:50:40 crc kubenswrapper[4699]: E0226 11:50:40.262046 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:50:52 crc kubenswrapper[4699]: I0226 11:50:52.261034 4699 scope.go:117] "RemoveContainer" 
containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:50:52 crc kubenswrapper[4699]: E0226 11:50:52.262054 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:51:07 crc kubenswrapper[4699]: I0226 11:51:07.261240 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:51:07 crc kubenswrapper[4699]: E0226 11:51:07.262175 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:51:21 crc kubenswrapper[4699]: I0226 11:51:21.261269 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:51:21 crc kubenswrapper[4699]: E0226 11:51:21.262142 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:51:32 crc kubenswrapper[4699]: I0226 11:51:32.261901 4699 scope.go:117] 
"RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:51:32 crc kubenswrapper[4699]: E0226 11:51:32.263073 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:51:46 crc kubenswrapper[4699]: I0226 11:51:46.266055 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:51:46 crc kubenswrapper[4699]: E0226 11:51:46.266986 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:51:57 crc kubenswrapper[4699]: I0226 11:51:57.260946 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:51:57 crc kubenswrapper[4699]: E0226 11:51:57.262004 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.161919 
4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535112-jg5nd"] Feb 26 11:52:00 crc kubenswrapper[4699]: E0226 11:52:00.163267 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e270a2c1-b1c4-498d-9adf-a3cbb51defce" containerName="oc" Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.163294 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e270a2c1-b1c4-498d-9adf-a3cbb51defce" containerName="oc" Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.163597 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e270a2c1-b1c4-498d-9adf-a3cbb51defce" containerName="oc" Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.164310 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535112-jg5nd" Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.169451 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.171607 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.171867 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.188105 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535112-jg5nd"] Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.298342 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qvmb\" (UniqueName: \"kubernetes.io/projected/5ae8ab95-85cc-473a-bbe7-6065a75e5720-kube-api-access-7qvmb\") pod \"auto-csr-approver-29535112-jg5nd\" (UID: \"5ae8ab95-85cc-473a-bbe7-6065a75e5720\") " 
pod="openshift-infra/auto-csr-approver-29535112-jg5nd" Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.553010 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qvmb\" (UniqueName: \"kubernetes.io/projected/5ae8ab95-85cc-473a-bbe7-6065a75e5720-kube-api-access-7qvmb\") pod \"auto-csr-approver-29535112-jg5nd\" (UID: \"5ae8ab95-85cc-473a-bbe7-6065a75e5720\") " pod="openshift-infra/auto-csr-approver-29535112-jg5nd" Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.582731 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qvmb\" (UniqueName: \"kubernetes.io/projected/5ae8ab95-85cc-473a-bbe7-6065a75e5720-kube-api-access-7qvmb\") pod \"auto-csr-approver-29535112-jg5nd\" (UID: \"5ae8ab95-85cc-473a-bbe7-6065a75e5720\") " pod="openshift-infra/auto-csr-approver-29535112-jg5nd" Feb 26 11:52:00 crc kubenswrapper[4699]: I0226 11:52:00.784422 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535112-jg5nd" Feb 26 11:52:01 crc kubenswrapper[4699]: I0226 11:52:01.281516 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535112-jg5nd"] Feb 26 11:52:01 crc kubenswrapper[4699]: I0226 11:52:01.691281 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535112-jg5nd" event={"ID":"5ae8ab95-85cc-473a-bbe7-6065a75e5720","Type":"ContainerStarted","Data":"df32e428e87e50ae5b5d52fe0abccaf14b1261cbb9be90b6f95e6b5819312938"} Feb 26 11:52:03 crc kubenswrapper[4699]: I0226 11:52:03.763800 4699 generic.go:334] "Generic (PLEG): container finished" podID="5ae8ab95-85cc-473a-bbe7-6065a75e5720" containerID="e246a9fcedf1306ea4a405c16944f8ad4f9cf630b0ec81a4cd3160f4b051a918" exitCode=0 Feb 26 11:52:03 crc kubenswrapper[4699]: I0226 11:52:03.763891 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29535112-jg5nd" event={"ID":"5ae8ab95-85cc-473a-bbe7-6065a75e5720","Type":"ContainerDied","Data":"e246a9fcedf1306ea4a405c16944f8ad4f9cf630b0ec81a4cd3160f4b051a918"} Feb 26 11:52:05 crc kubenswrapper[4699]: I0226 11:52:05.108190 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535112-jg5nd" Feb 26 11:52:05 crc kubenswrapper[4699]: I0226 11:52:05.226213 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qvmb\" (UniqueName: \"kubernetes.io/projected/5ae8ab95-85cc-473a-bbe7-6065a75e5720-kube-api-access-7qvmb\") pod \"5ae8ab95-85cc-473a-bbe7-6065a75e5720\" (UID: \"5ae8ab95-85cc-473a-bbe7-6065a75e5720\") " Feb 26 11:52:05 crc kubenswrapper[4699]: I0226 11:52:05.234547 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ae8ab95-85cc-473a-bbe7-6065a75e5720-kube-api-access-7qvmb" (OuterVolumeSpecName: "kube-api-access-7qvmb") pod "5ae8ab95-85cc-473a-bbe7-6065a75e5720" (UID: "5ae8ab95-85cc-473a-bbe7-6065a75e5720"). InnerVolumeSpecName "kube-api-access-7qvmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:52:05 crc kubenswrapper[4699]: I0226 11:52:05.408052 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qvmb\" (UniqueName: \"kubernetes.io/projected/5ae8ab95-85cc-473a-bbe7-6065a75e5720-kube-api-access-7qvmb\") on node \"crc\" DevicePath \"\"" Feb 26 11:52:05 crc kubenswrapper[4699]: I0226 11:52:05.782141 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535112-jg5nd" event={"ID":"5ae8ab95-85cc-473a-bbe7-6065a75e5720","Type":"ContainerDied","Data":"df32e428e87e50ae5b5d52fe0abccaf14b1261cbb9be90b6f95e6b5819312938"} Feb 26 11:52:05 crc kubenswrapper[4699]: I0226 11:52:05.782189 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df32e428e87e50ae5b5d52fe0abccaf14b1261cbb9be90b6f95e6b5819312938" Feb 26 11:52:05 crc kubenswrapper[4699]: I0226 11:52:05.782189 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535112-jg5nd" Feb 26 11:52:06 crc kubenswrapper[4699]: I0226 11:52:06.180913 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535106-cv2s5"] Feb 26 11:52:06 crc kubenswrapper[4699]: I0226 11:52:06.188445 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535106-cv2s5"] Feb 26 11:52:06 crc kubenswrapper[4699]: I0226 11:52:06.272064 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277ed376-d775-489c-82e7-93962bd513ff" path="/var/lib/kubelet/pods/277ed376-d775-489c-82e7-93962bd513ff/volumes" Feb 26 11:52:09 crc kubenswrapper[4699]: I0226 11:52:09.261418 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:52:09 crc kubenswrapper[4699]: E0226 11:52:09.261951 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:52:21 crc kubenswrapper[4699]: I0226 11:52:21.012861 4699 generic.go:334] "Generic (PLEG): container finished" podID="6436c321-6850-4db3-81b2-0dc329e10900" containerID="8bd3df01daa0942902ceb3f721b2d365aa21e62ede502d0a6f006ad1267cec53" exitCode=0 Feb 26 11:52:21 crc kubenswrapper[4699]: I0226 11:52:21.012967 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" event={"ID":"6436c321-6850-4db3-81b2-0dc329e10900","Type":"ContainerDied","Data":"8bd3df01daa0942902ceb3f721b2d365aa21e62ede502d0a6f006ad1267cec53"} Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.466022 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.659066 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57ppn\" (UniqueName: \"kubernetes.io/projected/6436c321-6850-4db3-81b2-0dc329e10900-kube-api-access-57ppn\") pod \"6436c321-6850-4db3-81b2-0dc329e10900\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.659201 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-ssh-key-openstack-edpm-ipam\") pod \"6436c321-6850-4db3-81b2-0dc329e10900\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.659423 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-secret-0\") pod \"6436c321-6850-4db3-81b2-0dc329e10900\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.659570 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-combined-ca-bundle\") pod \"6436c321-6850-4db3-81b2-0dc329e10900\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.659700 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-inventory\") pod \"6436c321-6850-4db3-81b2-0dc329e10900\" (UID: \"6436c321-6850-4db3-81b2-0dc329e10900\") " Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.665723 4699 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6436c321-6850-4db3-81b2-0dc329e10900-kube-api-access-57ppn" (OuterVolumeSpecName: "kube-api-access-57ppn") pod "6436c321-6850-4db3-81b2-0dc329e10900" (UID: "6436c321-6850-4db3-81b2-0dc329e10900"). InnerVolumeSpecName "kube-api-access-57ppn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.666321 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6436c321-6850-4db3-81b2-0dc329e10900" (UID: "6436c321-6850-4db3-81b2-0dc329e10900"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.691976 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6436c321-6850-4db3-81b2-0dc329e10900" (UID: "6436c321-6850-4db3-81b2-0dc329e10900"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.695334 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-inventory" (OuterVolumeSpecName: "inventory") pod "6436c321-6850-4db3-81b2-0dc329e10900" (UID: "6436c321-6850-4db3-81b2-0dc329e10900"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.698946 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "6436c321-6850-4db3-81b2-0dc329e10900" (UID: "6436c321-6850-4db3-81b2-0dc329e10900"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.762514 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.762717 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57ppn\" (UniqueName: \"kubernetes.io/projected/6436c321-6850-4db3-81b2-0dc329e10900-kube-api-access-57ppn\") on node \"crc\" DevicePath \"\"" Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.762793 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.762857 4699 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:52:22 crc kubenswrapper[4699]: I0226 11:52:22.762921 4699 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6436c321-6850-4db3-81b2-0dc329e10900-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.033778 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" event={"ID":"6436c321-6850-4db3-81b2-0dc329e10900","Type":"ContainerDied","Data":"da7eabc20b73f3cfcb5f479d6c26b5a779dbd02d4697b1b42ef3653df7b2ae5b"} Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.033932 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.034110 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da7eabc20b73f3cfcb5f479d6c26b5a779dbd02d4697b1b42ef3653df7b2ae5b" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.149482 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666"] Feb 26 11:52:23 crc kubenswrapper[4699]: E0226 11:52:23.150721 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6436c321-6850-4db3-81b2-0dc329e10900" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.150753 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6436c321-6850-4db3-81b2-0dc329e10900" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 11:52:23 crc kubenswrapper[4699]: E0226 11:52:23.150795 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae8ab95-85cc-473a-bbe7-6065a75e5720" containerName="oc" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.150804 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae8ab95-85cc-473a-bbe7-6065a75e5720" containerName="oc" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.151079 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ae8ab95-85cc-473a-bbe7-6065a75e5720" containerName="oc" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.151147 4699 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6436c321-6850-4db3-81b2-0dc329e10900" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.152065 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.155040 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.157643 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.166206 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.261494 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.261580 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.261659 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.261893 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.304479 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666"] Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.313480 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.313582 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.313752 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xnjl\" (UniqueName: \"kubernetes.io/projected/2c2e8329-038c-4347-b30f-f8b42f36cc67-kube-api-access-8xnjl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.313858 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.313899 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: 
\"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.313955 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.313993 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.314074 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.314132 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc 
kubenswrapper[4699]: I0226 11:52:23.314236 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.314322 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.416357 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.416429 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.416510 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xnjl\" (UniqueName: 
\"kubernetes.io/projected/2c2e8329-038c-4347-b30f-f8b42f36cc67-kube-api-access-8xnjl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.416562 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.416589 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.416622 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.416663 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.416710 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.416739 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.416794 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.416842 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.419103 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.422601 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.422634 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.422680 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.423046 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.423566 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.424018 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.425726 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.428710 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.428982 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.436604 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xnjl\" (UniqueName: \"kubernetes.io/projected/2c2e8329-038c-4347-b30f-f8b42f36cc67-kube-api-access-8xnjl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wv666\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:23 crc kubenswrapper[4699]: I0226 11:52:23.608397 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:52:24 crc kubenswrapper[4699]: I0226 11:52:24.160209 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666"] Feb 26 11:52:24 crc kubenswrapper[4699]: I0226 11:52:24.163236 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:52:24 crc kubenswrapper[4699]: I0226 11:52:24.261289 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:52:24 crc kubenswrapper[4699]: E0226 11:52:24.261603 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:52:25 crc kubenswrapper[4699]: I0226 11:52:25.052467 4699 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" event={"ID":"2c2e8329-038c-4347-b30f-f8b42f36cc67","Type":"ContainerStarted","Data":"c9492e99db4aabcb6a5c3c841ccddee9f07e9207f1e35227acfbe163ff34fec2"} Feb 26 11:52:26 crc kubenswrapper[4699]: I0226 11:52:26.061188 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" event={"ID":"2c2e8329-038c-4347-b30f-f8b42f36cc67","Type":"ContainerStarted","Data":"9e14db39e9ca42779c03c0f56859f1620acf93ef1802275caf2edd19f2d27624"} Feb 26 11:52:26 crc kubenswrapper[4699]: I0226 11:52:26.082047 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" podStartSLOduration=2.165358527 podStartE2EDuration="3.082016028s" podCreationTimestamp="2026-02-26 11:52:23 +0000 UTC" firstStartedPulling="2026-02-26 11:52:24.162881586 +0000 UTC m=+2489.973708020" lastFinishedPulling="2026-02-26 11:52:25.079539087 +0000 UTC m=+2490.890365521" observedRunningTime="2026-02-26 11:52:26.079461838 +0000 UTC m=+2491.890288272" watchObservedRunningTime="2026-02-26 11:52:26.082016028 +0000 UTC m=+2491.892842462" Feb 26 11:52:31 crc kubenswrapper[4699]: I0226 11:52:31.620830 4699 scope.go:117] "RemoveContainer" containerID="6316bd489dab2ee525da2e5168f12e3d42a5b7c5139e77da702337350ea3b44a" Feb 26 11:52:38 crc kubenswrapper[4699]: I0226 11:52:38.261331 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:52:38 crc kubenswrapper[4699]: E0226 11:52:38.262433 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:52:50 crc kubenswrapper[4699]: I0226 11:52:50.261454 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:52:50 crc kubenswrapper[4699]: E0226 11:52:50.262396 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:53:05 crc kubenswrapper[4699]: I0226 11:53:05.261670 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:53:05 crc kubenswrapper[4699]: E0226 11:53:05.262468 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.785518 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l7242"] Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.787934 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.798922 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2wpm\" (UniqueName: \"kubernetes.io/projected/8bffc9ae-b2b5-473a-8876-958983e1b5cc-kube-api-access-w2wpm\") pod \"community-operators-l7242\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.798989 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-utilities\") pod \"community-operators-l7242\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.799014 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-catalog-content\") pod \"community-operators-l7242\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.806502 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l7242"] Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.902177 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2wpm\" (UniqueName: \"kubernetes.io/projected/8bffc9ae-b2b5-473a-8876-958983e1b5cc-kube-api-access-w2wpm\") pod \"community-operators-l7242\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.902290 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-utilities\") pod \"community-operators-l7242\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.902334 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-catalog-content\") pod \"community-operators-l7242\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.903261 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-utilities\") pod \"community-operators-l7242\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.903286 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-catalog-content\") pod \"community-operators-l7242\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:08 crc kubenswrapper[4699]: I0226 11:53:08.924265 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2wpm\" (UniqueName: \"kubernetes.io/projected/8bffc9ae-b2b5-473a-8876-958983e1b5cc-kube-api-access-w2wpm\") pod \"community-operators-l7242\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:09 crc kubenswrapper[4699]: I0226 11:53:09.110524 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:09 crc kubenswrapper[4699]: I0226 11:53:09.564008 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l7242"] Feb 26 11:53:10 crc kubenswrapper[4699]: I0226 11:53:10.427026 4699 generic.go:334] "Generic (PLEG): container finished" podID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" containerID="9e7a7cd6623aa96bb6b9ff153e54ceb4b7943dc8d7122fd11cbfc89df20ba577" exitCode=0 Feb 26 11:53:10 crc kubenswrapper[4699]: I0226 11:53:10.427093 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7242" event={"ID":"8bffc9ae-b2b5-473a-8876-958983e1b5cc","Type":"ContainerDied","Data":"9e7a7cd6623aa96bb6b9ff153e54ceb4b7943dc8d7122fd11cbfc89df20ba577"} Feb 26 11:53:10 crc kubenswrapper[4699]: I0226 11:53:10.427339 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7242" event={"ID":"8bffc9ae-b2b5-473a-8876-958983e1b5cc","Type":"ContainerStarted","Data":"587c06830f08110563d11bd959f9ca1b7f81ea2321b8b0fba829c5801e29fcb7"} Feb 26 11:53:12 crc kubenswrapper[4699]: I0226 11:53:12.443326 4699 generic.go:334] "Generic (PLEG): container finished" podID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" containerID="f7690aff2701e37432ac7b6d5a7d18421e13b42c997817acc6658e7e5f9b41e0" exitCode=0 Feb 26 11:53:12 crc kubenswrapper[4699]: I0226 11:53:12.443432 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7242" event={"ID":"8bffc9ae-b2b5-473a-8876-958983e1b5cc","Type":"ContainerDied","Data":"f7690aff2701e37432ac7b6d5a7d18421e13b42c997817acc6658e7e5f9b41e0"} Feb 26 11:53:13 crc kubenswrapper[4699]: I0226 11:53:13.456908 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7242" 
event={"ID":"8bffc9ae-b2b5-473a-8876-958983e1b5cc","Type":"ContainerStarted","Data":"bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1"} Feb 26 11:53:13 crc kubenswrapper[4699]: I0226 11:53:13.487211 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l7242" podStartSLOduration=3.051567778 podStartE2EDuration="5.487188083s" podCreationTimestamp="2026-02-26 11:53:08 +0000 UTC" firstStartedPulling="2026-02-26 11:53:10.429267525 +0000 UTC m=+2536.240093959" lastFinishedPulling="2026-02-26 11:53:12.86488782 +0000 UTC m=+2538.675714264" observedRunningTime="2026-02-26 11:53:13.476858737 +0000 UTC m=+2539.287685221" watchObservedRunningTime="2026-02-26 11:53:13.487188083 +0000 UTC m=+2539.298014517" Feb 26 11:53:17 crc kubenswrapper[4699]: I0226 11:53:17.260689 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:53:17 crc kubenswrapper[4699]: E0226 11:53:17.261901 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:53:19 crc kubenswrapper[4699]: I0226 11:53:19.111061 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:19 crc kubenswrapper[4699]: I0226 11:53:19.111582 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:19 crc kubenswrapper[4699]: I0226 11:53:19.155944 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:19 crc kubenswrapper[4699]: I0226 11:53:19.574617 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:19 crc kubenswrapper[4699]: I0226 11:53:19.620658 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l7242"] Feb 26 11:53:21 crc kubenswrapper[4699]: I0226 11:53:21.542967 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l7242" podUID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" containerName="registry-server" containerID="cri-o://bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1" gracePeriod=2 Feb 26 11:53:21 crc kubenswrapper[4699]: I0226 11:53:21.944347 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.080004 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-utilities\") pod \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.080166 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2wpm\" (UniqueName: \"kubernetes.io/projected/8bffc9ae-b2b5-473a-8876-958983e1b5cc-kube-api-access-w2wpm\") pod \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.080196 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-catalog-content\") pod 
\"8bffc9ae-b2b5-473a-8876-958983e1b5cc\" (UID: \"8bffc9ae-b2b5-473a-8876-958983e1b5cc\") " Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.081045 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-utilities" (OuterVolumeSpecName: "utilities") pod "8bffc9ae-b2b5-473a-8876-958983e1b5cc" (UID: "8bffc9ae-b2b5-473a-8876-958983e1b5cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.086406 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bffc9ae-b2b5-473a-8876-958983e1b5cc-kube-api-access-w2wpm" (OuterVolumeSpecName: "kube-api-access-w2wpm") pod "8bffc9ae-b2b5-473a-8876-958983e1b5cc" (UID: "8bffc9ae-b2b5-473a-8876-958983e1b5cc"). InnerVolumeSpecName "kube-api-access-w2wpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.143512 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bffc9ae-b2b5-473a-8876-958983e1b5cc" (UID: "8bffc9ae-b2b5-473a-8876-958983e1b5cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.182867 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2wpm\" (UniqueName: \"kubernetes.io/projected/8bffc9ae-b2b5-473a-8876-958983e1b5cc-kube-api-access-w2wpm\") on node \"crc\" DevicePath \"\"" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.182910 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.182920 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bffc9ae-b2b5-473a-8876-958983e1b5cc-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.553681 4699 generic.go:334] "Generic (PLEG): container finished" podID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" containerID="bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1" exitCode=0 Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.553741 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7242" event={"ID":"8bffc9ae-b2b5-473a-8876-958983e1b5cc","Type":"ContainerDied","Data":"bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1"} Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.553786 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7242" event={"ID":"8bffc9ae-b2b5-473a-8876-958983e1b5cc","Type":"ContainerDied","Data":"587c06830f08110563d11bd959f9ca1b7f81ea2321b8b0fba829c5801e29fcb7"} Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.553809 4699 scope.go:117] "RemoveContainer" containerID="bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 
11:53:22.554891 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7242" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.581598 4699 scope.go:117] "RemoveContainer" containerID="f7690aff2701e37432ac7b6d5a7d18421e13b42c997817acc6658e7e5f9b41e0" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.604247 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l7242"] Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.613261 4699 scope.go:117] "RemoveContainer" containerID="9e7a7cd6623aa96bb6b9ff153e54ceb4b7943dc8d7122fd11cbfc89df20ba577" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.616255 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l7242"] Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.648261 4699 scope.go:117] "RemoveContainer" containerID="bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1" Feb 26 11:53:22 crc kubenswrapper[4699]: E0226 11:53:22.648787 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1\": container with ID starting with bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1 not found: ID does not exist" containerID="bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.648835 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1"} err="failed to get container status \"bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1\": rpc error: code = NotFound desc = could not find container \"bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1\": container with ID starting with 
bc50d20d4888831a6bedb5c285335eef8a68e1d761755841550d21819f1014e1 not found: ID does not exist" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.648866 4699 scope.go:117] "RemoveContainer" containerID="f7690aff2701e37432ac7b6d5a7d18421e13b42c997817acc6658e7e5f9b41e0" Feb 26 11:53:22 crc kubenswrapper[4699]: E0226 11:53:22.649256 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7690aff2701e37432ac7b6d5a7d18421e13b42c997817acc6658e7e5f9b41e0\": container with ID starting with f7690aff2701e37432ac7b6d5a7d18421e13b42c997817acc6658e7e5f9b41e0 not found: ID does not exist" containerID="f7690aff2701e37432ac7b6d5a7d18421e13b42c997817acc6658e7e5f9b41e0" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.649294 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7690aff2701e37432ac7b6d5a7d18421e13b42c997817acc6658e7e5f9b41e0"} err="failed to get container status \"f7690aff2701e37432ac7b6d5a7d18421e13b42c997817acc6658e7e5f9b41e0\": rpc error: code = NotFound desc = could not find container \"f7690aff2701e37432ac7b6d5a7d18421e13b42c997817acc6658e7e5f9b41e0\": container with ID starting with f7690aff2701e37432ac7b6d5a7d18421e13b42c997817acc6658e7e5f9b41e0 not found: ID does not exist" Feb 26 11:53:22 crc kubenswrapper[4699]: I0226 11:53:22.649321 4699 scope.go:117] "RemoveContainer" containerID="9e7a7cd6623aa96bb6b9ff153e54ceb4b7943dc8d7122fd11cbfc89df20ba577" Feb 26 11:53:22 crc kubenswrapper[4699]: E0226 11:53:22.649582 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e7a7cd6623aa96bb6b9ff153e54ceb4b7943dc8d7122fd11cbfc89df20ba577\": container with ID starting with 9e7a7cd6623aa96bb6b9ff153e54ceb4b7943dc8d7122fd11cbfc89df20ba577 not found: ID does not exist" containerID="9e7a7cd6623aa96bb6b9ff153e54ceb4b7943dc8d7122fd11cbfc89df20ba577" Feb 26 11:53:22 crc 
kubenswrapper[4699]: I0226 11:53:22.649610 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e7a7cd6623aa96bb6b9ff153e54ceb4b7943dc8d7122fd11cbfc89df20ba577"} err="failed to get container status \"9e7a7cd6623aa96bb6b9ff153e54ceb4b7943dc8d7122fd11cbfc89df20ba577\": rpc error: code = NotFound desc = could not find container \"9e7a7cd6623aa96bb6b9ff153e54ceb4b7943dc8d7122fd11cbfc89df20ba577\": container with ID starting with 9e7a7cd6623aa96bb6b9ff153e54ceb4b7943dc8d7122fd11cbfc89df20ba577 not found: ID does not exist" Feb 26 11:53:24 crc kubenswrapper[4699]: I0226 11:53:24.273750 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" path="/var/lib/kubelet/pods/8bffc9ae-b2b5-473a-8876-958983e1b5cc/volumes" Feb 26 11:53:31 crc kubenswrapper[4699]: I0226 11:53:31.262447 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:53:31 crc kubenswrapper[4699]: E0226 11:53:31.263648 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:53:44 crc kubenswrapper[4699]: I0226 11:53:44.261057 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:53:44 crc kubenswrapper[4699]: E0226 11:53:44.262448 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:53:58 crc kubenswrapper[4699]: I0226 11:53:58.261687 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:53:58 crc kubenswrapper[4699]: E0226 11:53:58.262609 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.152356 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535114-zp6df"] Feb 26 11:54:00 crc kubenswrapper[4699]: E0226 11:54:00.153242 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" containerName="extract-utilities" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.153258 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" containerName="extract-utilities" Feb 26 11:54:00 crc kubenswrapper[4699]: E0226 11:54:00.153285 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" containerName="extract-content" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.153292 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" containerName="extract-content" Feb 26 11:54:00 crc kubenswrapper[4699]: E0226 11:54:00.153307 4699 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" containerName="registry-server" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.153314 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" containerName="registry-server" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.153585 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bffc9ae-b2b5-473a-8876-958983e1b5cc" containerName="registry-server" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.154275 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535114-zp6df" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.161546 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535114-zp6df"] Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.161566 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.161851 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.161917 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.251910 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jntw7\" (UniqueName: \"kubernetes.io/projected/c8acba14-233d-44a8-98b6-93df64a45300-kube-api-access-jntw7\") pod \"auto-csr-approver-29535114-zp6df\" (UID: \"c8acba14-233d-44a8-98b6-93df64a45300\") " pod="openshift-infra/auto-csr-approver-29535114-zp6df" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.354080 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jntw7\" (UniqueName: 
\"kubernetes.io/projected/c8acba14-233d-44a8-98b6-93df64a45300-kube-api-access-jntw7\") pod \"auto-csr-approver-29535114-zp6df\" (UID: \"c8acba14-233d-44a8-98b6-93df64a45300\") " pod="openshift-infra/auto-csr-approver-29535114-zp6df" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.382916 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jntw7\" (UniqueName: \"kubernetes.io/projected/c8acba14-233d-44a8-98b6-93df64a45300-kube-api-access-jntw7\") pod \"auto-csr-approver-29535114-zp6df\" (UID: \"c8acba14-233d-44a8-98b6-93df64a45300\") " pod="openshift-infra/auto-csr-approver-29535114-zp6df" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.490433 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535114-zp6df" Feb 26 11:54:00 crc kubenswrapper[4699]: I0226 11:54:00.958694 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535114-zp6df"] Feb 26 11:54:01 crc kubenswrapper[4699]: I0226 11:54:01.896419 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535114-zp6df" event={"ID":"c8acba14-233d-44a8-98b6-93df64a45300","Type":"ContainerStarted","Data":"c34504929420ae9c1342ec0cb9206d4652cb123907ec8d9973ed44375f7fd77d"} Feb 26 11:54:02 crc kubenswrapper[4699]: I0226 11:54:02.907085 4699 generic.go:334] "Generic (PLEG): container finished" podID="c8acba14-233d-44a8-98b6-93df64a45300" containerID="ffa425939368131f51ca5df0c799cff39019457552b4886c8f2b5719e7868319" exitCode=0 Feb 26 11:54:02 crc kubenswrapper[4699]: I0226 11:54:02.907155 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535114-zp6df" event={"ID":"c8acba14-233d-44a8-98b6-93df64a45300","Type":"ContainerDied","Data":"ffa425939368131f51ca5df0c799cff39019457552b4886c8f2b5719e7868319"} Feb 26 11:54:04 crc kubenswrapper[4699]: I0226 11:54:04.245689 4699 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535114-zp6df" Feb 26 11:54:04 crc kubenswrapper[4699]: I0226 11:54:04.338491 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jntw7\" (UniqueName: \"kubernetes.io/projected/c8acba14-233d-44a8-98b6-93df64a45300-kube-api-access-jntw7\") pod \"c8acba14-233d-44a8-98b6-93df64a45300\" (UID: \"c8acba14-233d-44a8-98b6-93df64a45300\") " Feb 26 11:54:04 crc kubenswrapper[4699]: I0226 11:54:04.345350 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8acba14-233d-44a8-98b6-93df64a45300-kube-api-access-jntw7" (OuterVolumeSpecName: "kube-api-access-jntw7") pod "c8acba14-233d-44a8-98b6-93df64a45300" (UID: "c8acba14-233d-44a8-98b6-93df64a45300"). InnerVolumeSpecName "kube-api-access-jntw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:54:04 crc kubenswrapper[4699]: I0226 11:54:04.442276 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jntw7\" (UniqueName: \"kubernetes.io/projected/c8acba14-233d-44a8-98b6-93df64a45300-kube-api-access-jntw7\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:04 crc kubenswrapper[4699]: I0226 11:54:04.926760 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535114-zp6df" event={"ID":"c8acba14-233d-44a8-98b6-93df64a45300","Type":"ContainerDied","Data":"c34504929420ae9c1342ec0cb9206d4652cb123907ec8d9973ed44375f7fd77d"} Feb 26 11:54:04 crc kubenswrapper[4699]: I0226 11:54:04.926799 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c34504929420ae9c1342ec0cb9206d4652cb123907ec8d9973ed44375f7fd77d" Feb 26 11:54:04 crc kubenswrapper[4699]: I0226 11:54:04.926853 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535114-zp6df" Feb 26 11:54:05 crc kubenswrapper[4699]: I0226 11:54:05.313105 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535108-79cdj"] Feb 26 11:54:05 crc kubenswrapper[4699]: I0226 11:54:05.320636 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535108-79cdj"] Feb 26 11:54:06 crc kubenswrapper[4699]: I0226 11:54:06.272034 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6366100d-f68c-43ce-879b-4cc3f80c8156" path="/var/lib/kubelet/pods/6366100d-f68c-43ce-879b-4cc3f80c8156/volumes" Feb 26 11:54:09 crc kubenswrapper[4699]: I0226 11:54:09.261705 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:54:09 crc kubenswrapper[4699]: E0226 11:54:09.262395 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 11:54:23 crc kubenswrapper[4699]: I0226 11:54:23.261462 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:54:24 crc kubenswrapper[4699]: I0226 11:54:24.091955 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"4c58903dcb4a12909fcb0583a1d55149dc4c175867d594a32d984dde51ae536f"} Feb 26 11:54:31 crc kubenswrapper[4699]: I0226 11:54:31.754605 4699 scope.go:117] "RemoveContainer" 
containerID="a49e0f6a5b8aa98c17ff2dc316f41da6f3d780c3f18aaef30599837dcc6bc0ea" Feb 26 11:54:42 crc kubenswrapper[4699]: I0226 11:54:42.891171 4699 generic.go:334] "Generic (PLEG): container finished" podID="2c2e8329-038c-4347-b30f-f8b42f36cc67" containerID="9e14db39e9ca42779c03c0f56859f1620acf93ef1802275caf2edd19f2d27624" exitCode=0 Feb 26 11:54:42 crc kubenswrapper[4699]: I0226 11:54:42.891256 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" event={"ID":"2c2e8329-038c-4347-b30f-f8b42f36cc67","Type":"ContainerDied","Data":"9e14db39e9ca42779c03c0f56859f1620acf93ef1802275caf2edd19f2d27624"} Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.338788 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.481468 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-0\") pod \"2c2e8329-038c-4347-b30f-f8b42f36cc67\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.481569 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-1\") pod \"2c2e8329-038c-4347-b30f-f8b42f36cc67\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.481606 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-combined-ca-bundle\") pod \"2c2e8329-038c-4347-b30f-f8b42f36cc67\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " Feb 26 
11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.481636 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-1\") pod \"2c2e8329-038c-4347-b30f-f8b42f36cc67\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.481704 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-2\") pod \"2c2e8329-038c-4347-b30f-f8b42f36cc67\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.481732 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-inventory\") pod \"2c2e8329-038c-4347-b30f-f8b42f36cc67\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.481758 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-ssh-key-openstack-edpm-ipam\") pod \"2c2e8329-038c-4347-b30f-f8b42f36cc67\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.481800 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-3\") pod \"2c2e8329-038c-4347-b30f-f8b42f36cc67\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.481845 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-0\") pod \"2c2e8329-038c-4347-b30f-f8b42f36cc67\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.481885 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-extra-config-0\") pod \"2c2e8329-038c-4347-b30f-f8b42f36cc67\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.482031 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xnjl\" (UniqueName: \"kubernetes.io/projected/2c2e8329-038c-4347-b30f-f8b42f36cc67-kube-api-access-8xnjl\") pod \"2c2e8329-038c-4347-b30f-f8b42f36cc67\" (UID: \"2c2e8329-038c-4347-b30f-f8b42f36cc67\") " Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.489490 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2c2e8329-038c-4347-b30f-f8b42f36cc67" (UID: "2c2e8329-038c-4347-b30f-f8b42f36cc67"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.490731 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c2e8329-038c-4347-b30f-f8b42f36cc67-kube-api-access-8xnjl" (OuterVolumeSpecName: "kube-api-access-8xnjl") pod "2c2e8329-038c-4347-b30f-f8b42f36cc67" (UID: "2c2e8329-038c-4347-b30f-f8b42f36cc67"). InnerVolumeSpecName "kube-api-access-8xnjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.514262 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2c2e8329-038c-4347-b30f-f8b42f36cc67" (UID: "2c2e8329-038c-4347-b30f-f8b42f36cc67"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.520242 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2c2e8329-038c-4347-b30f-f8b42f36cc67" (UID: "2c2e8329-038c-4347-b30f-f8b42f36cc67"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.520266 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "2c2e8329-038c-4347-b30f-f8b42f36cc67" (UID: "2c2e8329-038c-4347-b30f-f8b42f36cc67"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.521469 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2c2e8329-038c-4347-b30f-f8b42f36cc67" (UID: "2c2e8329-038c-4347-b30f-f8b42f36cc67"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.526039 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "2c2e8329-038c-4347-b30f-f8b42f36cc67" (UID: "2c2e8329-038c-4347-b30f-f8b42f36cc67"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.531066 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "2c2e8329-038c-4347-b30f-f8b42f36cc67" (UID: "2c2e8329-038c-4347-b30f-f8b42f36cc67"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.531066 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-inventory" (OuterVolumeSpecName: "inventory") pod "2c2e8329-038c-4347-b30f-f8b42f36cc67" (UID: "2c2e8329-038c-4347-b30f-f8b42f36cc67"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.532345 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2c2e8329-038c-4347-b30f-f8b42f36cc67" (UID: "2c2e8329-038c-4347-b30f-f8b42f36cc67"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.533890 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2c2e8329-038c-4347-b30f-f8b42f36cc67" (UID: "2c2e8329-038c-4347-b30f-f8b42f36cc67"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.587517 4699 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.587988 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.588159 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.588266 4699 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.588373 4699 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 
11:54:44.588466 4699 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.588548 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xnjl\" (UniqueName: \"kubernetes.io/projected/2c2e8329-038c-4347-b30f-f8b42f36cc67-kube-api-access-8xnjl\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.588688 4699 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.588829 4699 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.588920 4699 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.589134 4699 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2c2e8329-038c-4347-b30f-f8b42f36cc67-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.910398 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" event={"ID":"2c2e8329-038c-4347-b30f-f8b42f36cc67","Type":"ContainerDied","Data":"c9492e99db4aabcb6a5c3c841ccddee9f07e9207f1e35227acfbe163ff34fec2"} Feb 26 
11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.910440 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9492e99db4aabcb6a5c3c841ccddee9f07e9207f1e35227acfbe163ff34fec2" Feb 26 11:54:44 crc kubenswrapper[4699]: I0226 11:54:44.910461 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wv666" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.106688 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9"] Feb 26 11:54:45 crc kubenswrapper[4699]: E0226 11:54:45.107328 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2e8329-038c-4347-b30f-f8b42f36cc67" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.107351 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2e8329-038c-4347-b30f-f8b42f36cc67" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 11:54:45 crc kubenswrapper[4699]: E0226 11:54:45.107390 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8acba14-233d-44a8-98b6-93df64a45300" containerName="oc" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.107400 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8acba14-233d-44a8-98b6-93df64a45300" containerName="oc" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.107651 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8acba14-233d-44a8-98b6-93df64a45300" containerName="oc" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.107681 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2e8329-038c-4347-b30f-f8b42f36cc67" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.108491 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.113940 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.114309 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.114833 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.115061 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.115286 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-69sdb" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.124458 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9"] Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.205066 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.205308 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.205739 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.205854 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjd7\" (UniqueName: \"kubernetes.io/projected/08bdd16a-fc18-4262-9175-a05b613a76c9-kube-api-access-qjjd7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.205931 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.206037 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.206102 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.308501 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.309447 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjjd7\" (UniqueName: \"kubernetes.io/projected/08bdd16a-fc18-4262-9175-a05b613a76c9-kube-api-access-qjjd7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.309541 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.309703 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.309763 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.309872 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.309988 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.314723 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.328205 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.328396 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.328842 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.328993 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.329559 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.332813 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjjd7\" (UniqueName: \"kubernetes.io/projected/08bdd16a-fc18-4262-9175-a05b613a76c9-kube-api-access-qjjd7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.436003 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:54:45 crc kubenswrapper[4699]: I0226 11:54:45.981039 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9"] Feb 26 11:54:46 crc kubenswrapper[4699]: I0226 11:54:46.931533 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" event={"ID":"08bdd16a-fc18-4262-9175-a05b613a76c9","Type":"ContainerStarted","Data":"43d0f761beaf929bd7b88c678a07a81fe65b54f961758754c84f435ce7b8d8cb"} Feb 26 11:54:46 crc kubenswrapper[4699]: I0226 11:54:46.931941 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" event={"ID":"08bdd16a-fc18-4262-9175-a05b613a76c9","Type":"ContainerStarted","Data":"14466ace843fec23ce73560f034f012e5b9e5664261a1686721bb34a65e7ea16"} Feb 26 11:54:46 crc kubenswrapper[4699]: I0226 11:54:46.952585 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" podStartSLOduration=1.525734689 podStartE2EDuration="1.952564236s" podCreationTimestamp="2026-02-26 11:54:45 +0000 UTC" firstStartedPulling="2026-02-26 11:54:45.98275775 +0000 UTC m=+2631.793584184" lastFinishedPulling="2026-02-26 11:54:46.409587297 +0000 UTC m=+2632.220413731" observedRunningTime="2026-02-26 11:54:46.949571311 +0000 UTC m=+2632.760397765" watchObservedRunningTime="2026-02-26 11:54:46.952564236 +0000 UTC m=+2632.763390690" Feb 26 11:55:35 crc kubenswrapper[4699]: I0226 11:55:35.810727 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ms696"] Feb 26 11:55:35 crc kubenswrapper[4699]: I0226 11:55:35.813484 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:35 crc kubenswrapper[4699]: I0226 11:55:35.826316 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ms696"] Feb 26 11:55:35 crc kubenswrapper[4699]: I0226 11:55:35.944153 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-catalog-content\") pod \"certified-operators-ms696\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:35 crc kubenswrapper[4699]: I0226 11:55:35.945148 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwtbf\" (UniqueName: \"kubernetes.io/projected/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-kube-api-access-dwtbf\") pod \"certified-operators-ms696\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:35 crc kubenswrapper[4699]: I0226 11:55:35.945315 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-utilities\") pod \"certified-operators-ms696\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:36 crc kubenswrapper[4699]: I0226 11:55:36.047132 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-catalog-content\") pod \"certified-operators-ms696\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:36 crc kubenswrapper[4699]: I0226 11:55:36.047317 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dwtbf\" (UniqueName: \"kubernetes.io/projected/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-kube-api-access-dwtbf\") pod \"certified-operators-ms696\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:36 crc kubenswrapper[4699]: I0226 11:55:36.047358 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-utilities\") pod \"certified-operators-ms696\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:36 crc kubenswrapper[4699]: I0226 11:55:36.047925 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-utilities\") pod \"certified-operators-ms696\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:36 crc kubenswrapper[4699]: I0226 11:55:36.047925 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-catalog-content\") pod \"certified-operators-ms696\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:36 crc kubenswrapper[4699]: I0226 11:55:36.069846 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwtbf\" (UniqueName: \"kubernetes.io/projected/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-kube-api-access-dwtbf\") pod \"certified-operators-ms696\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:36 crc kubenswrapper[4699]: I0226 11:55:36.143243 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:36 crc kubenswrapper[4699]: I0226 11:55:36.648505 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ms696"] Feb 26 11:55:37 crc kubenswrapper[4699]: I0226 11:55:37.394253 4699 generic.go:334] "Generic (PLEG): container finished" podID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" containerID="57058290b00f5d6b590d5a4c78114725d2a482c376a97f7adb8c6ad1e53c213e" exitCode=0 Feb 26 11:55:37 crc kubenswrapper[4699]: I0226 11:55:37.394321 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ms696" event={"ID":"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee","Type":"ContainerDied","Data":"57058290b00f5d6b590d5a4c78114725d2a482c376a97f7adb8c6ad1e53c213e"} Feb 26 11:55:37 crc kubenswrapper[4699]: I0226 11:55:37.394697 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ms696" event={"ID":"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee","Type":"ContainerStarted","Data":"f8058ebb930564641cfd3d71c132ace7ecff5864e9f26174f1874bbeeb27a955"} Feb 26 11:55:39 crc kubenswrapper[4699]: I0226 11:55:39.415915 4699 generic.go:334] "Generic (PLEG): container finished" podID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" containerID="46aeb100bd5a41ee978076bb87423c17006a41bb58ca71c78f257f6951501401" exitCode=0 Feb 26 11:55:39 crc kubenswrapper[4699]: I0226 11:55:39.416012 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ms696" event={"ID":"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee","Type":"ContainerDied","Data":"46aeb100bd5a41ee978076bb87423c17006a41bb58ca71c78f257f6951501401"} Feb 26 11:55:40 crc kubenswrapper[4699]: I0226 11:55:40.431798 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ms696" 
event={"ID":"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee","Type":"ContainerStarted","Data":"c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d"} Feb 26 11:55:40 crc kubenswrapper[4699]: I0226 11:55:40.464470 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ms696" podStartSLOduration=2.739494285 podStartE2EDuration="5.464446673s" podCreationTimestamp="2026-02-26 11:55:35 +0000 UTC" firstStartedPulling="2026-02-26 11:55:37.396728032 +0000 UTC m=+2683.207554466" lastFinishedPulling="2026-02-26 11:55:40.12168042 +0000 UTC m=+2685.932506854" observedRunningTime="2026-02-26 11:55:40.456456237 +0000 UTC m=+2686.267282701" watchObservedRunningTime="2026-02-26 11:55:40.464446673 +0000 UTC m=+2686.275273107" Feb 26 11:55:46 crc kubenswrapper[4699]: I0226 11:55:46.144315 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:46 crc kubenswrapper[4699]: I0226 11:55:46.144377 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:46 crc kubenswrapper[4699]: I0226 11:55:46.194360 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:46 crc kubenswrapper[4699]: I0226 11:55:46.527283 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:46 crc kubenswrapper[4699]: I0226 11:55:46.575867 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ms696"] Feb 26 11:55:48 crc kubenswrapper[4699]: I0226 11:55:48.499969 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ms696" podUID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" containerName="registry-server" 
containerID="cri-o://c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d" gracePeriod=2 Feb 26 11:55:48 crc kubenswrapper[4699]: I0226 11:55:48.958184 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.117695 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwtbf\" (UniqueName: \"kubernetes.io/projected/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-kube-api-access-dwtbf\") pod \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.118523 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-catalog-content\") pod \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.118666 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-utilities\") pod \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\" (UID: \"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee\") " Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.119681 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-utilities" (OuterVolumeSpecName: "utilities") pod "24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" (UID: "24a9d670-7b0f-45dc-ae64-5a2ef0c623ee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.125526 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-kube-api-access-dwtbf" (OuterVolumeSpecName: "kube-api-access-dwtbf") pod "24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" (UID: "24a9d670-7b0f-45dc-ae64-5a2ef0c623ee"). InnerVolumeSpecName "kube-api-access-dwtbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.185690 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" (UID: "24a9d670-7b0f-45dc-ae64-5a2ef0c623ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.221174 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwtbf\" (UniqueName: \"kubernetes.io/projected/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-kube-api-access-dwtbf\") on node \"crc\" DevicePath \"\"" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.221213 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.221223 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.512396 4699 generic.go:334] "Generic (PLEG): container finished" podID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" 
containerID="c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d" exitCode=0 Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.512444 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ms696" event={"ID":"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee","Type":"ContainerDied","Data":"c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d"} Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.512539 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ms696" event={"ID":"24a9d670-7b0f-45dc-ae64-5a2ef0c623ee","Type":"ContainerDied","Data":"f8058ebb930564641cfd3d71c132ace7ecff5864e9f26174f1874bbeeb27a955"} Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.512562 4699 scope.go:117] "RemoveContainer" containerID="c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.512457 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ms696" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.537573 4699 scope.go:117] "RemoveContainer" containerID="46aeb100bd5a41ee978076bb87423c17006a41bb58ca71c78f257f6951501401" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.571559 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ms696"] Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.571891 4699 scope.go:117] "RemoveContainer" containerID="57058290b00f5d6b590d5a4c78114725d2a482c376a97f7adb8c6ad1e53c213e" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.587856 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ms696"] Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.616856 4699 scope.go:117] "RemoveContainer" containerID="c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d" Feb 26 11:55:49 crc kubenswrapper[4699]: E0226 11:55:49.617593 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d\": container with ID starting with c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d not found: ID does not exist" containerID="c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.617644 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d"} err="failed to get container status \"c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d\": rpc error: code = NotFound desc = could not find container \"c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d\": container with ID starting with c98a95633db82982137dc140fdf6eaae665566024d6262406fab215329fcef6d not 
found: ID does not exist" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.617679 4699 scope.go:117] "RemoveContainer" containerID="46aeb100bd5a41ee978076bb87423c17006a41bb58ca71c78f257f6951501401" Feb 26 11:55:49 crc kubenswrapper[4699]: E0226 11:55:49.618032 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46aeb100bd5a41ee978076bb87423c17006a41bb58ca71c78f257f6951501401\": container with ID starting with 46aeb100bd5a41ee978076bb87423c17006a41bb58ca71c78f257f6951501401 not found: ID does not exist" containerID="46aeb100bd5a41ee978076bb87423c17006a41bb58ca71c78f257f6951501401" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.618052 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46aeb100bd5a41ee978076bb87423c17006a41bb58ca71c78f257f6951501401"} err="failed to get container status \"46aeb100bd5a41ee978076bb87423c17006a41bb58ca71c78f257f6951501401\": rpc error: code = NotFound desc = could not find container \"46aeb100bd5a41ee978076bb87423c17006a41bb58ca71c78f257f6951501401\": container with ID starting with 46aeb100bd5a41ee978076bb87423c17006a41bb58ca71c78f257f6951501401 not found: ID does not exist" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.618065 4699 scope.go:117] "RemoveContainer" containerID="57058290b00f5d6b590d5a4c78114725d2a482c376a97f7adb8c6ad1e53c213e" Feb 26 11:55:49 crc kubenswrapper[4699]: E0226 11:55:49.618579 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57058290b00f5d6b590d5a4c78114725d2a482c376a97f7adb8c6ad1e53c213e\": container with ID starting with 57058290b00f5d6b590d5a4c78114725d2a482c376a97f7adb8c6ad1e53c213e not found: ID does not exist" containerID="57058290b00f5d6b590d5a4c78114725d2a482c376a97f7adb8c6ad1e53c213e" Feb 26 11:55:49 crc kubenswrapper[4699]: I0226 11:55:49.618622 4699 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57058290b00f5d6b590d5a4c78114725d2a482c376a97f7adb8c6ad1e53c213e"} err="failed to get container status \"57058290b00f5d6b590d5a4c78114725d2a482c376a97f7adb8c6ad1e53c213e\": rpc error: code = NotFound desc = could not find container \"57058290b00f5d6b590d5a4c78114725d2a482c376a97f7adb8c6ad1e53c213e\": container with ID starting with 57058290b00f5d6b590d5a4c78114725d2a482c376a97f7adb8c6ad1e53c213e not found: ID does not exist" Feb 26 11:55:50 crc kubenswrapper[4699]: I0226 11:55:50.270774 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" path="/var/lib/kubelet/pods/24a9d670-7b0f-45dc-ae64-5a2ef0c623ee/volumes" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.147980 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535116-fwrcj"] Feb 26 11:56:00 crc kubenswrapper[4699]: E0226 11:56:00.149311 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" containerName="extract-utilities" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.149334 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" containerName="extract-utilities" Feb 26 11:56:00 crc kubenswrapper[4699]: E0226 11:56:00.149365 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" containerName="registry-server" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.149377 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" containerName="registry-server" Feb 26 11:56:00 crc kubenswrapper[4699]: E0226 11:56:00.149406 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" containerName="extract-content" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 
11:56:00.149414 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" containerName="extract-content" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.149675 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a9d670-7b0f-45dc-ae64-5a2ef0c623ee" containerName="registry-server" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.150496 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535116-fwrcj" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.153464 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.153541 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.153651 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.161755 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535116-fwrcj"] Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.242192 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzl4g\" (UniqueName: \"kubernetes.io/projected/f67852f2-cfab-4e51-b986-30f2a582877d-kube-api-access-kzl4g\") pod \"auto-csr-approver-29535116-fwrcj\" (UID: \"f67852f2-cfab-4e51-b986-30f2a582877d\") " pod="openshift-infra/auto-csr-approver-29535116-fwrcj" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.343814 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzl4g\" (UniqueName: \"kubernetes.io/projected/f67852f2-cfab-4e51-b986-30f2a582877d-kube-api-access-kzl4g\") pod \"auto-csr-approver-29535116-fwrcj\" (UID: 
\"f67852f2-cfab-4e51-b986-30f2a582877d\") " pod="openshift-infra/auto-csr-approver-29535116-fwrcj" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.369367 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzl4g\" (UniqueName: \"kubernetes.io/projected/f67852f2-cfab-4e51-b986-30f2a582877d-kube-api-access-kzl4g\") pod \"auto-csr-approver-29535116-fwrcj\" (UID: \"f67852f2-cfab-4e51-b986-30f2a582877d\") " pod="openshift-infra/auto-csr-approver-29535116-fwrcj" Feb 26 11:56:00 crc kubenswrapper[4699]: I0226 11:56:00.476489 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535116-fwrcj" Feb 26 11:56:01 crc kubenswrapper[4699]: I0226 11:56:01.493991 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535116-fwrcj"] Feb 26 11:56:01 crc kubenswrapper[4699]: I0226 11:56:01.637322 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535116-fwrcj" event={"ID":"f67852f2-cfab-4e51-b986-30f2a582877d","Type":"ContainerStarted","Data":"60489b00888faac90b3b6d74d0dee71d9d5747773017ec11625f44cd730a2ce5"} Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.117346 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2fx9d"] Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.120362 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.130454 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2fx9d"] Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.285693 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-catalog-content\") pod \"redhat-marketplace-2fx9d\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.285845 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74pvl\" (UniqueName: \"kubernetes.io/projected/b6d70b9c-e164-4128-9ed5-36526cbc378a-kube-api-access-74pvl\") pod \"redhat-marketplace-2fx9d\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.285945 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-utilities\") pod \"redhat-marketplace-2fx9d\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.387431 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-catalog-content\") pod \"redhat-marketplace-2fx9d\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.387958 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-74pvl\" (UniqueName: \"kubernetes.io/projected/b6d70b9c-e164-4128-9ed5-36526cbc378a-kube-api-access-74pvl\") pod \"redhat-marketplace-2fx9d\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.388047 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-utilities\") pod \"redhat-marketplace-2fx9d\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.388628 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-utilities\") pod \"redhat-marketplace-2fx9d\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.389936 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-catalog-content\") pod \"redhat-marketplace-2fx9d\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.418144 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74pvl\" (UniqueName: \"kubernetes.io/projected/b6d70b9c-e164-4128-9ed5-36526cbc378a-kube-api-access-74pvl\") pod \"redhat-marketplace-2fx9d\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.449414 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:02 crc kubenswrapper[4699]: I0226 11:56:02.970923 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2fx9d"] Feb 26 11:56:02 crc kubenswrapper[4699]: W0226 11:56:02.983394 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6d70b9c_e164_4128_9ed5_36526cbc378a.slice/crio-7ef6d935dedc52e617caaf85457a1f25f23f860e59a7f5ec692c0136e824c8d6 WatchSource:0}: Error finding container 7ef6d935dedc52e617caaf85457a1f25f23f860e59a7f5ec692c0136e824c8d6: Status 404 returned error can't find the container with id 7ef6d935dedc52e617caaf85457a1f25f23f860e59a7f5ec692c0136e824c8d6 Feb 26 11:56:03 crc kubenswrapper[4699]: I0226 11:56:03.665049 4699 generic.go:334] "Generic (PLEG): container finished" podID="f67852f2-cfab-4e51-b986-30f2a582877d" containerID="97b5ef4eef61ea4aaf36ee8c050903fab28c7dee69a56263785c220e6a8c6292" exitCode=0 Feb 26 11:56:03 crc kubenswrapper[4699]: I0226 11:56:03.665193 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535116-fwrcj" event={"ID":"f67852f2-cfab-4e51-b986-30f2a582877d","Type":"ContainerDied","Data":"97b5ef4eef61ea4aaf36ee8c050903fab28c7dee69a56263785c220e6a8c6292"} Feb 26 11:56:03 crc kubenswrapper[4699]: I0226 11:56:03.669147 4699 generic.go:334] "Generic (PLEG): container finished" podID="b6d70b9c-e164-4128-9ed5-36526cbc378a" containerID="322fd2dcdda72d493e836e24ccab86fb0862ced48640c0c0225773af5aa27f1b" exitCode=0 Feb 26 11:56:03 crc kubenswrapper[4699]: I0226 11:56:03.669210 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2fx9d" event={"ID":"b6d70b9c-e164-4128-9ed5-36526cbc378a","Type":"ContainerDied","Data":"322fd2dcdda72d493e836e24ccab86fb0862ced48640c0c0225773af5aa27f1b"} Feb 26 11:56:03 crc kubenswrapper[4699]: I0226 
11:56:03.669266 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2fx9d" event={"ID":"b6d70b9c-e164-4128-9ed5-36526cbc378a","Type":"ContainerStarted","Data":"7ef6d935dedc52e617caaf85457a1f25f23f860e59a7f5ec692c0136e824c8d6"} Feb 26 11:56:05 crc kubenswrapper[4699]: I0226 11:56:05.021400 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535116-fwrcj" Feb 26 11:56:05 crc kubenswrapper[4699]: I0226 11:56:05.150653 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzl4g\" (UniqueName: \"kubernetes.io/projected/f67852f2-cfab-4e51-b986-30f2a582877d-kube-api-access-kzl4g\") pod \"f67852f2-cfab-4e51-b986-30f2a582877d\" (UID: \"f67852f2-cfab-4e51-b986-30f2a582877d\") " Feb 26 11:56:05 crc kubenswrapper[4699]: I0226 11:56:05.158713 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67852f2-cfab-4e51-b986-30f2a582877d-kube-api-access-kzl4g" (OuterVolumeSpecName: "kube-api-access-kzl4g") pod "f67852f2-cfab-4e51-b986-30f2a582877d" (UID: "f67852f2-cfab-4e51-b986-30f2a582877d"). InnerVolumeSpecName "kube-api-access-kzl4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:56:05 crc kubenswrapper[4699]: I0226 11:56:05.253085 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzl4g\" (UniqueName: \"kubernetes.io/projected/f67852f2-cfab-4e51-b986-30f2a582877d-kube-api-access-kzl4g\") on node \"crc\" DevicePath \"\"" Feb 26 11:56:05 crc kubenswrapper[4699]: I0226 11:56:05.709744 4699 generic.go:334] "Generic (PLEG): container finished" podID="b6d70b9c-e164-4128-9ed5-36526cbc378a" containerID="b819d5a5eb8a18ead17e03fc7ee8635e05baef41d12dc5fde7391cfb14d92944" exitCode=0 Feb 26 11:56:05 crc kubenswrapper[4699]: I0226 11:56:05.709868 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2fx9d" event={"ID":"b6d70b9c-e164-4128-9ed5-36526cbc378a","Type":"ContainerDied","Data":"b819d5a5eb8a18ead17e03fc7ee8635e05baef41d12dc5fde7391cfb14d92944"} Feb 26 11:56:05 crc kubenswrapper[4699]: I0226 11:56:05.711771 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535116-fwrcj" event={"ID":"f67852f2-cfab-4e51-b986-30f2a582877d","Type":"ContainerDied","Data":"60489b00888faac90b3b6d74d0dee71d9d5747773017ec11625f44cd730a2ce5"} Feb 26 11:56:05 crc kubenswrapper[4699]: I0226 11:56:05.711812 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60489b00888faac90b3b6d74d0dee71d9d5747773017ec11625f44cd730a2ce5" Feb 26 11:56:05 crc kubenswrapper[4699]: I0226 11:56:05.711963 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535116-fwrcj" Feb 26 11:56:06 crc kubenswrapper[4699]: I0226 11:56:06.106261 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535110-g2n8d"] Feb 26 11:56:06 crc kubenswrapper[4699]: I0226 11:56:06.116373 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535110-g2n8d"] Feb 26 11:56:06 crc kubenswrapper[4699]: I0226 11:56:06.274221 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e270a2c1-b1c4-498d-9adf-a3cbb51defce" path="/var/lib/kubelet/pods/e270a2c1-b1c4-498d-9adf-a3cbb51defce/volumes" Feb 26 11:56:06 crc kubenswrapper[4699]: I0226 11:56:06.724638 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2fx9d" event={"ID":"b6d70b9c-e164-4128-9ed5-36526cbc378a","Type":"ContainerStarted","Data":"44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1"} Feb 26 11:56:06 crc kubenswrapper[4699]: I0226 11:56:06.748362 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2fx9d" podStartSLOduration=2.324149415 podStartE2EDuration="4.748343567s" podCreationTimestamp="2026-02-26 11:56:02 +0000 UTC" firstStartedPulling="2026-02-26 11:56:03.67297054 +0000 UTC m=+2709.483796974" lastFinishedPulling="2026-02-26 11:56:06.097164692 +0000 UTC m=+2711.907991126" observedRunningTime="2026-02-26 11:56:06.742922704 +0000 UTC m=+2712.553749158" watchObservedRunningTime="2026-02-26 11:56:06.748343567 +0000 UTC m=+2712.559170001" Feb 26 11:56:12 crc kubenswrapper[4699]: I0226 11:56:12.450006 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:12 crc kubenswrapper[4699]: I0226 11:56:12.450774 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:12 crc kubenswrapper[4699]: I0226 11:56:12.504033 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:12 crc kubenswrapper[4699]: I0226 11:56:12.823676 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:12 crc kubenswrapper[4699]: I0226 11:56:12.874661 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2fx9d"] Feb 26 11:56:14 crc kubenswrapper[4699]: I0226 11:56:14.795613 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2fx9d" podUID="b6d70b9c-e164-4128-9ed5-36526cbc378a" containerName="registry-server" containerID="cri-o://44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1" gracePeriod=2 Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.238559 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.371967 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-catalog-content\") pod \"b6d70b9c-e164-4128-9ed5-36526cbc378a\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.372292 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-utilities\") pod \"b6d70b9c-e164-4128-9ed5-36526cbc378a\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.373324 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-utilities" (OuterVolumeSpecName: "utilities") pod "b6d70b9c-e164-4128-9ed5-36526cbc378a" (UID: "b6d70b9c-e164-4128-9ed5-36526cbc378a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.373653 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74pvl\" (UniqueName: \"kubernetes.io/projected/b6d70b9c-e164-4128-9ed5-36526cbc378a-kube-api-access-74pvl\") pod \"b6d70b9c-e164-4128-9ed5-36526cbc378a\" (UID: \"b6d70b9c-e164-4128-9ed5-36526cbc378a\") " Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.375242 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.384497 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d70b9c-e164-4128-9ed5-36526cbc378a-kube-api-access-74pvl" (OuterVolumeSpecName: "kube-api-access-74pvl") pod "b6d70b9c-e164-4128-9ed5-36526cbc378a" (UID: "b6d70b9c-e164-4128-9ed5-36526cbc378a"). InnerVolumeSpecName "kube-api-access-74pvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.403944 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6d70b9c-e164-4128-9ed5-36526cbc378a" (UID: "b6d70b9c-e164-4128-9ed5-36526cbc378a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.477320 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74pvl\" (UniqueName: \"kubernetes.io/projected/b6d70b9c-e164-4128-9ed5-36526cbc378a-kube-api-access-74pvl\") on node \"crc\" DevicePath \"\"" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.477362 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d70b9c-e164-4128-9ed5-36526cbc378a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.824880 4699 generic.go:334] "Generic (PLEG): container finished" podID="b6d70b9c-e164-4128-9ed5-36526cbc378a" containerID="44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1" exitCode=0 Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.824951 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2fx9d" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.824969 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2fx9d" event={"ID":"b6d70b9c-e164-4128-9ed5-36526cbc378a","Type":"ContainerDied","Data":"44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1"} Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.826202 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2fx9d" event={"ID":"b6d70b9c-e164-4128-9ed5-36526cbc378a","Type":"ContainerDied","Data":"7ef6d935dedc52e617caaf85457a1f25f23f860e59a7f5ec692c0136e824c8d6"} Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.826243 4699 scope.go:117] "RemoveContainer" containerID="44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.861600 4699 scope.go:117] "RemoveContainer" 
containerID="b819d5a5eb8a18ead17e03fc7ee8635e05baef41d12dc5fde7391cfb14d92944" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.872395 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2fx9d"] Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.880923 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2fx9d"] Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.887526 4699 scope.go:117] "RemoveContainer" containerID="322fd2dcdda72d493e836e24ccab86fb0862ced48640c0c0225773af5aa27f1b" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.927456 4699 scope.go:117] "RemoveContainer" containerID="44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1" Feb 26 11:56:15 crc kubenswrapper[4699]: E0226 11:56:15.927991 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1\": container with ID starting with 44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1 not found: ID does not exist" containerID="44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.928099 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1"} err="failed to get container status \"44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1\": rpc error: code = NotFound desc = could not find container \"44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1\": container with ID starting with 44e48f2cdac118ffb00f481fbec6095f53c866c7375c9c1b41c179d0e7de3da1 not found: ID does not exist" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.928257 4699 scope.go:117] "RemoveContainer" 
containerID="b819d5a5eb8a18ead17e03fc7ee8635e05baef41d12dc5fde7391cfb14d92944" Feb 26 11:56:15 crc kubenswrapper[4699]: E0226 11:56:15.928796 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b819d5a5eb8a18ead17e03fc7ee8635e05baef41d12dc5fde7391cfb14d92944\": container with ID starting with b819d5a5eb8a18ead17e03fc7ee8635e05baef41d12dc5fde7391cfb14d92944 not found: ID does not exist" containerID="b819d5a5eb8a18ead17e03fc7ee8635e05baef41d12dc5fde7391cfb14d92944" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.928833 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b819d5a5eb8a18ead17e03fc7ee8635e05baef41d12dc5fde7391cfb14d92944"} err="failed to get container status \"b819d5a5eb8a18ead17e03fc7ee8635e05baef41d12dc5fde7391cfb14d92944\": rpc error: code = NotFound desc = could not find container \"b819d5a5eb8a18ead17e03fc7ee8635e05baef41d12dc5fde7391cfb14d92944\": container with ID starting with b819d5a5eb8a18ead17e03fc7ee8635e05baef41d12dc5fde7391cfb14d92944 not found: ID does not exist" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.928855 4699 scope.go:117] "RemoveContainer" containerID="322fd2dcdda72d493e836e24ccab86fb0862ced48640c0c0225773af5aa27f1b" Feb 26 11:56:15 crc kubenswrapper[4699]: E0226 11:56:15.929254 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"322fd2dcdda72d493e836e24ccab86fb0862ced48640c0c0225773af5aa27f1b\": container with ID starting with 322fd2dcdda72d493e836e24ccab86fb0862ced48640c0c0225773af5aa27f1b not found: ID does not exist" containerID="322fd2dcdda72d493e836e24ccab86fb0862ced48640c0c0225773af5aa27f1b" Feb 26 11:56:15 crc kubenswrapper[4699]: I0226 11:56:15.929353 4699 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"322fd2dcdda72d493e836e24ccab86fb0862ced48640c0c0225773af5aa27f1b"} err="failed to get container status \"322fd2dcdda72d493e836e24ccab86fb0862ced48640c0c0225773af5aa27f1b\": rpc error: code = NotFound desc = could not find container \"322fd2dcdda72d493e836e24ccab86fb0862ced48640c0c0225773af5aa27f1b\": container with ID starting with 322fd2dcdda72d493e836e24ccab86fb0862ced48640c0c0225773af5aa27f1b not found: ID does not exist" Feb 26 11:56:16 crc kubenswrapper[4699]: I0226 11:56:16.272582 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d70b9c-e164-4128-9ed5-36526cbc378a" path="/var/lib/kubelet/pods/b6d70b9c-e164-4128-9ed5-36526cbc378a/volumes" Feb 26 11:56:31 crc kubenswrapper[4699]: I0226 11:56:31.856628 4699 scope.go:117] "RemoveContainer" containerID="5c3d5a2f0c08caa11b3efe5f7dadcab2f42f5d3eecfcc331eaac28aadfec2f57" Feb 26 11:56:41 crc kubenswrapper[4699]: I0226 11:56:41.584564 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:56:41 crc kubenswrapper[4699]: I0226 11:56:41.585358 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:57:04 crc kubenswrapper[4699]: I0226 11:57:04.251174 4699 generic.go:334] "Generic (PLEG): container finished" podID="08bdd16a-fc18-4262-9175-a05b613a76c9" containerID="43d0f761beaf929bd7b88c678a07a81fe65b54f961758754c84f435ce7b8d8cb" exitCode=0 Feb 26 11:57:04 crc kubenswrapper[4699]: I0226 11:57:04.251296 4699 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" event={"ID":"08bdd16a-fc18-4262-9175-a05b613a76c9","Type":"ContainerDied","Data":"43d0f761beaf929bd7b88c678a07a81fe65b54f961758754c84f435ce7b8d8cb"} Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.685390 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.776043 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-1\") pod \"08bdd16a-fc18-4262-9175-a05b613a76c9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.805810 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "08bdd16a-fc18-4262-9175-a05b613a76c9" (UID: "08bdd16a-fc18-4262-9175-a05b613a76c9"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.877676 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-2\") pod \"08bdd16a-fc18-4262-9175-a05b613a76c9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.877762 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-telemetry-combined-ca-bundle\") pod \"08bdd16a-fc18-4262-9175-a05b613a76c9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.877924 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-inventory\") pod \"08bdd16a-fc18-4262-9175-a05b613a76c9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.878018 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ssh-key-openstack-edpm-ipam\") pod \"08bdd16a-fc18-4262-9175-a05b613a76c9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.878732 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjjd7\" (UniqueName: \"kubernetes.io/projected/08bdd16a-fc18-4262-9175-a05b613a76c9-kube-api-access-qjjd7\") pod \"08bdd16a-fc18-4262-9175-a05b613a76c9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.878756 4699 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-0\") pod \"08bdd16a-fc18-4262-9175-a05b613a76c9\" (UID: \"08bdd16a-fc18-4262-9175-a05b613a76c9\") " Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.879416 4699 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.881918 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "08bdd16a-fc18-4262-9175-a05b613a76c9" (UID: "08bdd16a-fc18-4262-9175-a05b613a76c9"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.882547 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08bdd16a-fc18-4262-9175-a05b613a76c9-kube-api-access-qjjd7" (OuterVolumeSpecName: "kube-api-access-qjjd7") pod "08bdd16a-fc18-4262-9175-a05b613a76c9" (UID: "08bdd16a-fc18-4262-9175-a05b613a76c9"). InnerVolumeSpecName "kube-api-access-qjjd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.903964 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "08bdd16a-fc18-4262-9175-a05b613a76c9" (UID: "08bdd16a-fc18-4262-9175-a05b613a76c9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.905014 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-inventory" (OuterVolumeSpecName: "inventory") pod "08bdd16a-fc18-4262-9175-a05b613a76c9" (UID: "08bdd16a-fc18-4262-9175-a05b613a76c9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.906809 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "08bdd16a-fc18-4262-9175-a05b613a76c9" (UID: "08bdd16a-fc18-4262-9175-a05b613a76c9"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.911780 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "08bdd16a-fc18-4262-9175-a05b613a76c9" (UID: "08bdd16a-fc18-4262-9175-a05b613a76c9"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.980829 4699 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.980871 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.980882 4699 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.980892 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjjd7\" (UniqueName: \"kubernetes.io/projected/08bdd16a-fc18-4262-9175-a05b613a76c9-kube-api-access-qjjd7\") on node \"crc\" DevicePath \"\"" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.980902 4699 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 26 11:57:05 crc kubenswrapper[4699]: I0226 11:57:05.980917 4699 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bdd16a-fc18-4262-9175-a05b613a76c9-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:57:06 crc kubenswrapper[4699]: I0226 11:57:06.275037 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" 
event={"ID":"08bdd16a-fc18-4262-9175-a05b613a76c9","Type":"ContainerDied","Data":"14466ace843fec23ce73560f034f012e5b9e5664261a1686721bb34a65e7ea16"} Feb 26 11:57:06 crc kubenswrapper[4699]: I0226 11:57:06.275074 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14466ace843fec23ce73560f034f012e5b9e5664261a1686721bb34a65e7ea16" Feb 26 11:57:06 crc kubenswrapper[4699]: I0226 11:57:06.275144 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9" Feb 26 11:57:11 crc kubenswrapper[4699]: I0226 11:57:11.584757 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:57:11 crc kubenswrapper[4699]: I0226 11:57:11.585550 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:57:41 crc kubenswrapper[4699]: I0226 11:57:41.585422 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:57:41 crc kubenswrapper[4699]: I0226 11:57:41.586075 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:57:41 crc kubenswrapper[4699]: I0226 11:57:41.586142 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 11:57:41 crc kubenswrapper[4699]: I0226 11:57:41.587024 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c58903dcb4a12909fcb0583a1d55149dc4c175867d594a32d984dde51ae536f"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:57:41 crc kubenswrapper[4699]: I0226 11:57:41.587091 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://4c58903dcb4a12909fcb0583a1d55149dc4c175867d594a32d984dde51ae536f" gracePeriod=600 Feb 26 11:57:41 crc kubenswrapper[4699]: I0226 11:57:41.730782 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="4c58903dcb4a12909fcb0583a1d55149dc4c175867d594a32d984dde51ae536f" exitCode=0 Feb 26 11:57:41 crc kubenswrapper[4699]: I0226 11:57:41.730833 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"4c58903dcb4a12909fcb0583a1d55149dc4c175867d594a32d984dde51ae536f"} Feb 26 11:57:41 crc kubenswrapper[4699]: I0226 11:57:41.730875 4699 scope.go:117] "RemoveContainer" containerID="c9ba98af8e384a46c220f982325ae50f5859e35e543bee096d657cca90e89dde" Feb 26 11:57:42 crc kubenswrapper[4699]: I0226 11:57:42.742405 4699 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948"} Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.381533 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qrkbm"] Feb 26 11:57:43 crc kubenswrapper[4699]: E0226 11:57:43.382771 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67852f2-cfab-4e51-b986-30f2a582877d" containerName="oc" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.382792 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67852f2-cfab-4e51-b986-30f2a582877d" containerName="oc" Feb 26 11:57:43 crc kubenswrapper[4699]: E0226 11:57:43.382809 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d70b9c-e164-4128-9ed5-36526cbc378a" containerName="registry-server" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.382816 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d70b9c-e164-4128-9ed5-36526cbc378a" containerName="registry-server" Feb 26 11:57:43 crc kubenswrapper[4699]: E0226 11:57:43.382835 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d70b9c-e164-4128-9ed5-36526cbc378a" containerName="extract-content" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.382842 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d70b9c-e164-4128-9ed5-36526cbc378a" containerName="extract-content" Feb 26 11:57:43 crc kubenswrapper[4699]: E0226 11:57:43.382860 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bdd16a-fc18-4262-9175-a05b613a76c9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.382867 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bdd16a-fc18-4262-9175-a05b613a76c9" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 11:57:43 crc kubenswrapper[4699]: E0226 11:57:43.382876 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d70b9c-e164-4128-9ed5-36526cbc378a" containerName="extract-utilities" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.382882 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d70b9c-e164-4128-9ed5-36526cbc378a" containerName="extract-utilities" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.383086 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="08bdd16a-fc18-4262-9175-a05b613a76c9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.383109 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d70b9c-e164-4128-9ed5-36526cbc378a" containerName="registry-server" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.383142 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67852f2-cfab-4e51-b986-30f2a582877d" containerName="oc" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.384505 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.395394 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrkbm"] Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.397910 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-catalog-content\") pod \"redhat-operators-qrkbm\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.397966 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-utilities\") pod \"redhat-operators-qrkbm\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.398020 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grjbk\" (UniqueName: \"kubernetes.io/projected/9631f0a5-2f36-4dc0-a473-38fe1d97215d-kube-api-access-grjbk\") pod \"redhat-operators-qrkbm\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.500000 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-catalog-content\") pod \"redhat-operators-qrkbm\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.500082 4699 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-utilities\") pod \"redhat-operators-qrkbm\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.500170 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grjbk\" (UniqueName: \"kubernetes.io/projected/9631f0a5-2f36-4dc0-a473-38fe1d97215d-kube-api-access-grjbk\") pod \"redhat-operators-qrkbm\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.500574 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-catalog-content\") pod \"redhat-operators-qrkbm\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.500674 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-utilities\") pod \"redhat-operators-qrkbm\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.524854 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grjbk\" (UniqueName: \"kubernetes.io/projected/9631f0a5-2f36-4dc0-a473-38fe1d97215d-kube-api-access-grjbk\") pod \"redhat-operators-qrkbm\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:43 crc kubenswrapper[4699]: I0226 11:57:43.706289 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:44 crc kubenswrapper[4699]: I0226 11:57:44.215765 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrkbm"] Feb 26 11:57:44 crc kubenswrapper[4699]: W0226 11:57:44.218315 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9631f0a5_2f36_4dc0_a473_38fe1d97215d.slice/crio-bf3ecd22b85cbf24c93a2ad16cffc894ab5fa7f0bb05ff2228704a19c193be6a WatchSource:0}: Error finding container bf3ecd22b85cbf24c93a2ad16cffc894ab5fa7f0bb05ff2228704a19c193be6a: Status 404 returned error can't find the container with id bf3ecd22b85cbf24c93a2ad16cffc894ab5fa7f0bb05ff2228704a19c193be6a Feb 26 11:57:44 crc kubenswrapper[4699]: I0226 11:57:44.764237 4699 generic.go:334] "Generic (PLEG): container finished" podID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerID="3d779da817a1ce4610ae811406c88b435f18d42b5fe4797bebcfec8947827eba" exitCode=0 Feb 26 11:57:44 crc kubenswrapper[4699]: I0226 11:57:44.764639 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrkbm" event={"ID":"9631f0a5-2f36-4dc0-a473-38fe1d97215d","Type":"ContainerDied","Data":"3d779da817a1ce4610ae811406c88b435f18d42b5fe4797bebcfec8947827eba"} Feb 26 11:57:44 crc kubenswrapper[4699]: I0226 11:57:44.764663 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrkbm" event={"ID":"9631f0a5-2f36-4dc0-a473-38fe1d97215d","Type":"ContainerStarted","Data":"bf3ecd22b85cbf24c93a2ad16cffc894ab5fa7f0bb05ff2228704a19c193be6a"} Feb 26 11:57:44 crc kubenswrapper[4699]: I0226 11:57:44.768033 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:57:47 crc kubenswrapper[4699]: I0226 11:57:47.796312 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-qrkbm" event={"ID":"9631f0a5-2f36-4dc0-a473-38fe1d97215d","Type":"ContainerStarted","Data":"53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575"} Feb 26 11:57:50 crc kubenswrapper[4699]: I0226 11:57:50.829812 4699 generic.go:334] "Generic (PLEG): container finished" podID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerID="53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575" exitCode=0 Feb 26 11:57:50 crc kubenswrapper[4699]: I0226 11:57:50.829904 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrkbm" event={"ID":"9631f0a5-2f36-4dc0-a473-38fe1d97215d","Type":"ContainerDied","Data":"53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575"} Feb 26 11:57:52 crc kubenswrapper[4699]: I0226 11:57:52.851535 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrkbm" event={"ID":"9631f0a5-2f36-4dc0-a473-38fe1d97215d","Type":"ContainerStarted","Data":"4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc"} Feb 26 11:57:52 crc kubenswrapper[4699]: I0226 11:57:52.870678 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qrkbm" podStartSLOduration=2.535592525 podStartE2EDuration="9.870661088s" podCreationTimestamp="2026-02-26 11:57:43 +0000 UTC" firstStartedPulling="2026-02-26 11:57:44.767289204 +0000 UTC m=+2810.578115648" lastFinishedPulling="2026-02-26 11:57:52.102357777 +0000 UTC m=+2817.913184211" observedRunningTime="2026-02-26 11:57:52.86833041 +0000 UTC m=+2818.679156864" watchObservedRunningTime="2026-02-26 11:57:52.870661088 +0000 UTC m=+2818.681487512" Feb 26 11:57:53 crc kubenswrapper[4699]: I0226 11:57:53.706490 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:53 crc kubenswrapper[4699]: I0226 11:57:53.706554 4699 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:57:54 crc kubenswrapper[4699]: I0226 11:57:54.770101 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrkbm" podUID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerName="registry-server" probeResult="failure" output=< Feb 26 11:57:54 crc kubenswrapper[4699]: timeout: failed to connect service ":50051" within 1s Feb 26 11:57:54 crc kubenswrapper[4699]: > Feb 26 11:58:00 crc kubenswrapper[4699]: I0226 11:58:00.148859 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535118-n92bn"] Feb 26 11:58:00 crc kubenswrapper[4699]: I0226 11:58:00.152193 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535118-n92bn" Feb 26 11:58:00 crc kubenswrapper[4699]: I0226 11:58:00.155180 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 11:58:00 crc kubenswrapper[4699]: I0226 11:58:00.155243 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:58:00 crc kubenswrapper[4699]: I0226 11:58:00.155287 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:58:00 crc kubenswrapper[4699]: I0226 11:58:00.162757 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535118-n92bn"] Feb 26 11:58:00 crc kubenswrapper[4699]: I0226 11:58:00.276771 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sll54\" (UniqueName: \"kubernetes.io/projected/6bde379e-7dd7-4b4b-bc25-b83d0174b100-kube-api-access-sll54\") pod \"auto-csr-approver-29535118-n92bn\" (UID: \"6bde379e-7dd7-4b4b-bc25-b83d0174b100\") " 
pod="openshift-infra/auto-csr-approver-29535118-n92bn" Feb 26 11:58:00 crc kubenswrapper[4699]: I0226 11:58:00.378267 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sll54\" (UniqueName: \"kubernetes.io/projected/6bde379e-7dd7-4b4b-bc25-b83d0174b100-kube-api-access-sll54\") pod \"auto-csr-approver-29535118-n92bn\" (UID: \"6bde379e-7dd7-4b4b-bc25-b83d0174b100\") " pod="openshift-infra/auto-csr-approver-29535118-n92bn" Feb 26 11:58:00 crc kubenswrapper[4699]: I0226 11:58:00.409357 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sll54\" (UniqueName: \"kubernetes.io/projected/6bde379e-7dd7-4b4b-bc25-b83d0174b100-kube-api-access-sll54\") pod \"auto-csr-approver-29535118-n92bn\" (UID: \"6bde379e-7dd7-4b4b-bc25-b83d0174b100\") " pod="openshift-infra/auto-csr-approver-29535118-n92bn" Feb 26 11:58:00 crc kubenswrapper[4699]: I0226 11:58:00.482740 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535118-n92bn" Feb 26 11:58:01 crc kubenswrapper[4699]: I0226 11:58:01.016231 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535118-n92bn"] Feb 26 11:58:01 crc kubenswrapper[4699]: W0226 11:58:01.027679 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bde379e_7dd7_4b4b_bc25_b83d0174b100.slice/crio-91099037474b511010fb5109b00e16bf53ed98c580d5b518a258c4893ae89cc4 WatchSource:0}: Error finding container 91099037474b511010fb5109b00e16bf53ed98c580d5b518a258c4893ae89cc4: Status 404 returned error can't find the container with id 91099037474b511010fb5109b00e16bf53ed98c580d5b518a258c4893ae89cc4 Feb 26 11:58:01 crc kubenswrapper[4699]: I0226 11:58:01.930984 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535118-n92bn" 
event={"ID":"6bde379e-7dd7-4b4b-bc25-b83d0174b100","Type":"ContainerStarted","Data":"91099037474b511010fb5109b00e16bf53ed98c580d5b518a258c4893ae89cc4"} Feb 26 11:58:02 crc kubenswrapper[4699]: I0226 11:58:02.940498 4699 generic.go:334] "Generic (PLEG): container finished" podID="6bde379e-7dd7-4b4b-bc25-b83d0174b100" containerID="7ee1327d152002290262452d2af09136d94e1e411a1eeb32531cce9b1d48c20c" exitCode=0 Feb 26 11:58:02 crc kubenswrapper[4699]: I0226 11:58:02.940554 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535118-n92bn" event={"ID":"6bde379e-7dd7-4b4b-bc25-b83d0174b100","Type":"ContainerDied","Data":"7ee1327d152002290262452d2af09136d94e1e411a1eeb32531cce9b1d48c20c"} Feb 26 11:58:03 crc kubenswrapper[4699]: I0226 11:58:03.753948 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:58:03 crc kubenswrapper[4699]: I0226 11:58:03.802429 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:58:03 crc kubenswrapper[4699]: I0226 11:58:03.991278 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrkbm"] Feb 26 11:58:04 crc kubenswrapper[4699]: I0226 11:58:04.343659 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535118-n92bn" Feb 26 11:58:04 crc kubenswrapper[4699]: I0226 11:58:04.484142 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sll54\" (UniqueName: \"kubernetes.io/projected/6bde379e-7dd7-4b4b-bc25-b83d0174b100-kube-api-access-sll54\") pod \"6bde379e-7dd7-4b4b-bc25-b83d0174b100\" (UID: \"6bde379e-7dd7-4b4b-bc25-b83d0174b100\") " Feb 26 11:58:04 crc kubenswrapper[4699]: I0226 11:58:04.493488 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bde379e-7dd7-4b4b-bc25-b83d0174b100-kube-api-access-sll54" (OuterVolumeSpecName: "kube-api-access-sll54") pod "6bde379e-7dd7-4b4b-bc25-b83d0174b100" (UID: "6bde379e-7dd7-4b4b-bc25-b83d0174b100"). InnerVolumeSpecName "kube-api-access-sll54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:58:04 crc kubenswrapper[4699]: I0226 11:58:04.586376 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sll54\" (UniqueName: \"kubernetes.io/projected/6bde379e-7dd7-4b4b-bc25-b83d0174b100-kube-api-access-sll54\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:04 crc kubenswrapper[4699]: I0226 11:58:04.959744 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535118-n92bn" Feb 26 11:58:04 crc kubenswrapper[4699]: I0226 11:58:04.959757 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535118-n92bn" event={"ID":"6bde379e-7dd7-4b4b-bc25-b83d0174b100","Type":"ContainerDied","Data":"91099037474b511010fb5109b00e16bf53ed98c580d5b518a258c4893ae89cc4"} Feb 26 11:58:04 crc kubenswrapper[4699]: I0226 11:58:04.959799 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91099037474b511010fb5109b00e16bf53ed98c580d5b518a258c4893ae89cc4" Feb 26 11:58:04 crc kubenswrapper[4699]: I0226 11:58:04.960030 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qrkbm" podUID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerName="registry-server" containerID="cri-o://4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc" gracePeriod=2 Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.426357 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535112-jg5nd"] Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.437725 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535112-jg5nd"] Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.528862 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.714364 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-catalog-content\") pod \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.716076 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-utilities\") pod \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.716211 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grjbk\" (UniqueName: \"kubernetes.io/projected/9631f0a5-2f36-4dc0-a473-38fe1d97215d-kube-api-access-grjbk\") pod \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\" (UID: \"9631f0a5-2f36-4dc0-a473-38fe1d97215d\") " Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.717070 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-utilities" (OuterVolumeSpecName: "utilities") pod "9631f0a5-2f36-4dc0-a473-38fe1d97215d" (UID: "9631f0a5-2f36-4dc0-a473-38fe1d97215d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.728999 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9631f0a5-2f36-4dc0-a473-38fe1d97215d-kube-api-access-grjbk" (OuterVolumeSpecName: "kube-api-access-grjbk") pod "9631f0a5-2f36-4dc0-a473-38fe1d97215d" (UID: "9631f0a5-2f36-4dc0-a473-38fe1d97215d"). InnerVolumeSpecName "kube-api-access-grjbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.818812 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.818847 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grjbk\" (UniqueName: \"kubernetes.io/projected/9631f0a5-2f36-4dc0-a473-38fe1d97215d-kube-api-access-grjbk\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.835272 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9631f0a5-2f36-4dc0-a473-38fe1d97215d" (UID: "9631f0a5-2f36-4dc0-a473-38fe1d97215d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.920244 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9631f0a5-2f36-4dc0-a473-38fe1d97215d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.970655 4699 generic.go:334] "Generic (PLEG): container finished" podID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerID="4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc" exitCode=0 Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.970721 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrkbm" Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.970725 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrkbm" event={"ID":"9631f0a5-2f36-4dc0-a473-38fe1d97215d","Type":"ContainerDied","Data":"4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc"} Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.970838 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrkbm" event={"ID":"9631f0a5-2f36-4dc0-a473-38fe1d97215d","Type":"ContainerDied","Data":"bf3ecd22b85cbf24c93a2ad16cffc894ab5fa7f0bb05ff2228704a19c193be6a"} Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.970858 4699 scope.go:117] "RemoveContainer" containerID="4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc" Feb 26 11:58:05 crc kubenswrapper[4699]: I0226 11:58:05.998596 4699 scope.go:117] "RemoveContainer" containerID="53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.010406 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrkbm"] Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.018939 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qrkbm"] Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.041435 4699 scope.go:117] "RemoveContainer" containerID="3d779da817a1ce4610ae811406c88b435f18d42b5fe4797bebcfec8947827eba" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.063088 4699 scope.go:117] "RemoveContainer" containerID="4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc" Feb 26 11:58:06 crc kubenswrapper[4699]: E0226 11:58:06.063671 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc\": container with ID starting with 4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc not found: ID does not exist" containerID="4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.063707 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc"} err="failed to get container status \"4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc\": rpc error: code = NotFound desc = could not find container \"4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc\": container with ID starting with 4c7e57aede1fc2c2d0c835fac1c21b609cc5544169e8bce6fd32a979b79f6ebc not found: ID does not exist" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.063729 4699 scope.go:117] "RemoveContainer" containerID="53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575" Feb 26 11:58:06 crc kubenswrapper[4699]: E0226 11:58:06.064051 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575\": container with ID starting with 53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575 not found: ID does not exist" containerID="53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.064077 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575"} err="failed to get container status \"53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575\": rpc error: code = NotFound desc = could not find container \"53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575\": container with ID 
starting with 53f520b4d9fd9ed3058f67f634d54cf5c2b1235187b713b704194c6029177575 not found: ID does not exist" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.064097 4699 scope.go:117] "RemoveContainer" containerID="3d779da817a1ce4610ae811406c88b435f18d42b5fe4797bebcfec8947827eba" Feb 26 11:58:06 crc kubenswrapper[4699]: E0226 11:58:06.064475 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d779da817a1ce4610ae811406c88b435f18d42b5fe4797bebcfec8947827eba\": container with ID starting with 3d779da817a1ce4610ae811406c88b435f18d42b5fe4797bebcfec8947827eba not found: ID does not exist" containerID="3d779da817a1ce4610ae811406c88b435f18d42b5fe4797bebcfec8947827eba" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.064503 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d779da817a1ce4610ae811406c88b435f18d42b5fe4797bebcfec8947827eba"} err="failed to get container status \"3d779da817a1ce4610ae811406c88b435f18d42b5fe4797bebcfec8947827eba\": rpc error: code = NotFound desc = could not find container \"3d779da817a1ce4610ae811406c88b435f18d42b5fe4797bebcfec8947827eba\": container with ID starting with 3d779da817a1ce4610ae811406c88b435f18d42b5fe4797bebcfec8947827eba not found: ID does not exist" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.270501 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ae8ab95-85cc-473a-bbe7-6065a75e5720" path="/var/lib/kubelet/pods/5ae8ab95-85cc-473a-bbe7-6065a75e5720/volumes" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.271296 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" path="/var/lib/kubelet/pods/9631f0a5-2f36-4dc0-a473-38fe1d97215d/volumes" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.613678 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] 
Feb 26 11:58:06 crc kubenswrapper[4699]: E0226 11:58:06.614091 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerName="extract-utilities" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.614105 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerName="extract-utilities" Feb 26 11:58:06 crc kubenswrapper[4699]: E0226 11:58:06.614130 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerName="registry-server" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.614141 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerName="registry-server" Feb 26 11:58:06 crc kubenswrapper[4699]: E0226 11:58:06.614166 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bde379e-7dd7-4b4b-bc25-b83d0174b100" containerName="oc" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.614174 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bde379e-7dd7-4b4b-bc25-b83d0174b100" containerName="oc" Feb 26 11:58:06 crc kubenswrapper[4699]: E0226 11:58:06.614189 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerName="extract-content" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.614195 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerName="extract-content" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.614382 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="9631f0a5-2f36-4dc0-a473-38fe1d97215d" containerName="registry-server" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.614398 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bde379e-7dd7-4b4b-bc25-b83d0174b100" containerName="oc" Feb 26 11:58:06 crc kubenswrapper[4699]: 
I0226 11:58:06.615231 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.618188 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fmwlb" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.618440 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.618506 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.618442 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.636511 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.734506 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.734558 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.734597 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.734640 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-config-data\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.734688 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.734727 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw89z\" (UniqueName: \"kubernetes.io/projected/19e02200-91be-49f8-8174-4a0bf6cda9dd-kube-api-access-qw89z\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.734763 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.734787 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.734996 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.836861 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.836961 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.837050 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.837080 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") 
pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.837100 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.837137 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-config-data\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.837161 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.837186 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw89z\" (UniqueName: \"kubernetes.io/projected/19e02200-91be-49f8-8174-4a0bf6cda9dd-kube-api-access-qw89z\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.837217 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config\") pod \"tempest-tests-tempest\" (UID: 
\"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.837610 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.837887 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.837975 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.838401 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.840274 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-config-data\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " 
pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.842721 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.842893 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.846652 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.858730 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw89z\" (UniqueName: \"kubernetes.io/projected/19e02200-91be-49f8-8174-4a0bf6cda9dd-kube-api-access-qw89z\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.877089 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " pod="openstack/tempest-tests-tempest" Feb 26 11:58:06 crc kubenswrapper[4699]: I0226 11:58:06.937469 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 11:58:07 crc kubenswrapper[4699]: I0226 11:58:07.381636 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 11:58:08 crc kubenswrapper[4699]: I0226 11:58:08.333829 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"19e02200-91be-49f8-8174-4a0bf6cda9dd","Type":"ContainerStarted","Data":"0f178f25ec5476c2b73a67092a0049cc1be8c1984e676d6f03c82e6dac970a0f"} Feb 26 11:58:31 crc kubenswrapper[4699]: I0226 11:58:31.961085 4699 scope.go:117] "RemoveContainer" containerID="e246a9fcedf1306ea4a405c16944f8ad4f9cf630b0ec81a4cd3160f4b051a918" Feb 26 11:58:38 crc kubenswrapper[4699]: E0226 11:58:38.991011 4699 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 26 11:58:38 crc kubenswrapper[4699]: E0226 11:58:38.991950 4699 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qw89z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(19e02200-91be-49f8-8174-4a0bf6cda9dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:58:38 crc kubenswrapper[4699]: E0226 11:58:38.993229 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="19e02200-91be-49f8-8174-4a0bf6cda9dd" Feb 26 11:58:39 crc kubenswrapper[4699]: E0226 11:58:39.638075 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="19e02200-91be-49f8-8174-4a0bf6cda9dd" Feb 26 11:58:55 crc 
kubenswrapper[4699]: I0226 11:58:55.781989 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"19e02200-91be-49f8-8174-4a0bf6cda9dd","Type":"ContainerStarted","Data":"084f210d6c46d1c100bf0bcfdc7ffd17238944ee1beffdf271d0e8035c249561"} Feb 26 11:58:55 crc kubenswrapper[4699]: I0226 11:58:55.799343 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.503911135 podStartE2EDuration="50.799327305s" podCreationTimestamp="2026-02-26 11:58:05 +0000 UTC" firstStartedPulling="2026-02-26 11:58:07.386191333 +0000 UTC m=+2833.197017767" lastFinishedPulling="2026-02-26 11:58:53.681607513 +0000 UTC m=+2879.492433937" observedRunningTime="2026-02-26 11:58:55.797550864 +0000 UTC m=+2881.608377308" watchObservedRunningTime="2026-02-26 11:58:55.799327305 +0000 UTC m=+2881.610153739" Feb 26 11:59:41 crc kubenswrapper[4699]: I0226 11:59:41.584769 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:59:41 crc kubenswrapper[4699]: I0226 11:59:41.585612 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.153432 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg"] Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.155679 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.158900 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.159154 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.164511 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535120-xkftf"] Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.166469 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535120-xkftf" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.171998 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.172208 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.172233 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.174571 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535120-xkftf"] Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.183686 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg"] Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.325638 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/26ea785f-e6f4-487c-9c19-f7bff53a2a12-config-volume\") pod \"collect-profiles-29535120-q8qcg\" (UID: \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.325692 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs9v4\" (UniqueName: \"kubernetes.io/projected/062171a4-9cf3-460e-822d-2dc7b5baaf9b-kube-api-access-rs9v4\") pod \"auto-csr-approver-29535120-xkftf\" (UID: \"062171a4-9cf3-460e-822d-2dc7b5baaf9b\") " pod="openshift-infra/auto-csr-approver-29535120-xkftf" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.325986 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbhdd\" (UniqueName: \"kubernetes.io/projected/26ea785f-e6f4-487c-9c19-f7bff53a2a12-kube-api-access-fbhdd\") pod \"collect-profiles-29535120-q8qcg\" (UID: \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.326219 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26ea785f-e6f4-487c-9c19-f7bff53a2a12-secret-volume\") pod \"collect-profiles-29535120-q8qcg\" (UID: \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.428652 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26ea785f-e6f4-487c-9c19-f7bff53a2a12-secret-volume\") pod \"collect-profiles-29535120-q8qcg\" (UID: \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 
12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.429203 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26ea785f-e6f4-487c-9c19-f7bff53a2a12-config-volume\") pod \"collect-profiles-29535120-q8qcg\" (UID: \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.429246 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs9v4\" (UniqueName: \"kubernetes.io/projected/062171a4-9cf3-460e-822d-2dc7b5baaf9b-kube-api-access-rs9v4\") pod \"auto-csr-approver-29535120-xkftf\" (UID: \"062171a4-9cf3-460e-822d-2dc7b5baaf9b\") " pod="openshift-infra/auto-csr-approver-29535120-xkftf" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.429380 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbhdd\" (UniqueName: \"kubernetes.io/projected/26ea785f-e6f4-487c-9c19-f7bff53a2a12-kube-api-access-fbhdd\") pod \"collect-profiles-29535120-q8qcg\" (UID: \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.429990 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26ea785f-e6f4-487c-9c19-f7bff53a2a12-config-volume\") pod \"collect-profiles-29535120-q8qcg\" (UID: \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.439984 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26ea785f-e6f4-487c-9c19-f7bff53a2a12-secret-volume\") pod \"collect-profiles-29535120-q8qcg\" (UID: 
\"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.444852 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs9v4\" (UniqueName: \"kubernetes.io/projected/062171a4-9cf3-460e-822d-2dc7b5baaf9b-kube-api-access-rs9v4\") pod \"auto-csr-approver-29535120-xkftf\" (UID: \"062171a4-9cf3-460e-822d-2dc7b5baaf9b\") " pod="openshift-infra/auto-csr-approver-29535120-xkftf" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.447811 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbhdd\" (UniqueName: \"kubernetes.io/projected/26ea785f-e6f4-487c-9c19-f7bff53a2a12-kube-api-access-fbhdd\") pod \"collect-profiles-29535120-q8qcg\" (UID: \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.488942 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:00 crc kubenswrapper[4699]: I0226 12:00:00.496322 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535120-xkftf" Feb 26 12:00:01 crc kubenswrapper[4699]: I0226 12:00:01.007930 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535120-xkftf"] Feb 26 12:00:01 crc kubenswrapper[4699]: I0226 12:00:01.018490 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg"] Feb 26 12:00:01 crc kubenswrapper[4699]: I0226 12:00:01.768295 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535120-xkftf" event={"ID":"062171a4-9cf3-460e-822d-2dc7b5baaf9b","Type":"ContainerStarted","Data":"c26d871505780476f7154f1136c24b88741942dcf05b640336ae294176e2c781"} Feb 26 12:00:01 crc kubenswrapper[4699]: I0226 12:00:01.770770 4699 generic.go:334] "Generic (PLEG): container finished" podID="26ea785f-e6f4-487c-9c19-f7bff53a2a12" containerID="304eeefd135ff84fa620ff0aadd68b2912afb8f7d23f40cde4711f81278e81fe" exitCode=0 Feb 26 12:00:01 crc kubenswrapper[4699]: I0226 12:00:01.770834 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" event={"ID":"26ea785f-e6f4-487c-9c19-f7bff53a2a12","Type":"ContainerDied","Data":"304eeefd135ff84fa620ff0aadd68b2912afb8f7d23f40cde4711f81278e81fe"} Feb 26 12:00:01 crc kubenswrapper[4699]: I0226 12:00:01.770895 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" event={"ID":"26ea785f-e6f4-487c-9c19-f7bff53a2a12","Type":"ContainerStarted","Data":"8d63703d2ccf60e02ccc5992cdbb35dabbc475b53bce0c58d259ee68fd5667ef"} Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.165251 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.284210 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26ea785f-e6f4-487c-9c19-f7bff53a2a12-secret-volume\") pod \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\" (UID: \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.284467 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26ea785f-e6f4-487c-9c19-f7bff53a2a12-config-volume\") pod \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\" (UID: \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.284491 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbhdd\" (UniqueName: \"kubernetes.io/projected/26ea785f-e6f4-487c-9c19-f7bff53a2a12-kube-api-access-fbhdd\") pod \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\" (UID: \"26ea785f-e6f4-487c-9c19-f7bff53a2a12\") " Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.285377 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ea785f-e6f4-487c-9c19-f7bff53a2a12-config-volume" (OuterVolumeSpecName: "config-volume") pod "26ea785f-e6f4-487c-9c19-f7bff53a2a12" (UID: "26ea785f-e6f4-487c-9c19-f7bff53a2a12"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.290898 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ea785f-e6f4-487c-9c19-f7bff53a2a12-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "26ea785f-e6f4-487c-9c19-f7bff53a2a12" (UID: "26ea785f-e6f4-487c-9c19-f7bff53a2a12"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.290937 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ea785f-e6f4-487c-9c19-f7bff53a2a12-kube-api-access-fbhdd" (OuterVolumeSpecName: "kube-api-access-fbhdd") pod "26ea785f-e6f4-487c-9c19-f7bff53a2a12" (UID: "26ea785f-e6f4-487c-9c19-f7bff53a2a12"). InnerVolumeSpecName "kube-api-access-fbhdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.387775 4699 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26ea785f-e6f4-487c-9c19-f7bff53a2a12-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.387980 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbhdd\" (UniqueName: \"kubernetes.io/projected/26ea785f-e6f4-487c-9c19-f7bff53a2a12-kube-api-access-fbhdd\") on node \"crc\" DevicePath \"\"" Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.388070 4699 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26ea785f-e6f4-487c-9c19-f7bff53a2a12-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.791181 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" event={"ID":"26ea785f-e6f4-487c-9c19-f7bff53a2a12","Type":"ContainerDied","Data":"8d63703d2ccf60e02ccc5992cdbb35dabbc475b53bce0c58d259ee68fd5667ef"} Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.791227 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d63703d2ccf60e02ccc5992cdbb35dabbc475b53bce0c58d259ee68fd5667ef" Feb 26 12:00:03 crc kubenswrapper[4699]: I0226 12:00:03.791262 4699 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-q8qcg" Feb 26 12:00:04 crc kubenswrapper[4699]: I0226 12:00:04.254831 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4"] Feb 26 12:00:04 crc kubenswrapper[4699]: I0226 12:00:04.273276 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535075-hl4g4"] Feb 26 12:00:06 crc kubenswrapper[4699]: I0226 12:00:06.321360 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8aec36-74ad-4c69-baf8-d672010495e9" path="/var/lib/kubelet/pods/ed8aec36-74ad-4c69-baf8-d672010495e9/volumes" Feb 26 12:00:06 crc kubenswrapper[4699]: I0226 12:00:06.819956 4699 generic.go:334] "Generic (PLEG): container finished" podID="062171a4-9cf3-460e-822d-2dc7b5baaf9b" containerID="133bac2de294eabd3d63693bc2552e8927f3fa0a60ee9ff7dd1f74c8eac8b98e" exitCode=0 Feb 26 12:00:06 crc kubenswrapper[4699]: I0226 12:00:06.820024 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535120-xkftf" event={"ID":"062171a4-9cf3-460e-822d-2dc7b5baaf9b","Type":"ContainerDied","Data":"133bac2de294eabd3d63693bc2552e8927f3fa0a60ee9ff7dd1f74c8eac8b98e"} Feb 26 12:00:08 crc kubenswrapper[4699]: I0226 12:00:08.444577 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535120-xkftf" Feb 26 12:00:08 crc kubenswrapper[4699]: I0226 12:00:08.561669 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs9v4\" (UniqueName: \"kubernetes.io/projected/062171a4-9cf3-460e-822d-2dc7b5baaf9b-kube-api-access-rs9v4\") pod \"062171a4-9cf3-460e-822d-2dc7b5baaf9b\" (UID: \"062171a4-9cf3-460e-822d-2dc7b5baaf9b\") " Feb 26 12:00:08 crc kubenswrapper[4699]: I0226 12:00:08.569419 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062171a4-9cf3-460e-822d-2dc7b5baaf9b-kube-api-access-rs9v4" (OuterVolumeSpecName: "kube-api-access-rs9v4") pod "062171a4-9cf3-460e-822d-2dc7b5baaf9b" (UID: "062171a4-9cf3-460e-822d-2dc7b5baaf9b"). InnerVolumeSpecName "kube-api-access-rs9v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:00:08 crc kubenswrapper[4699]: I0226 12:00:08.665016 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs9v4\" (UniqueName: \"kubernetes.io/projected/062171a4-9cf3-460e-822d-2dc7b5baaf9b-kube-api-access-rs9v4\") on node \"crc\" DevicePath \"\"" Feb 26 12:00:08 crc kubenswrapper[4699]: I0226 12:00:08.839687 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535120-xkftf" event={"ID":"062171a4-9cf3-460e-822d-2dc7b5baaf9b","Type":"ContainerDied","Data":"c26d871505780476f7154f1136c24b88741942dcf05b640336ae294176e2c781"} Feb 26 12:00:08 crc kubenswrapper[4699]: I0226 12:00:08.839715 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535120-xkftf" Feb 26 12:00:08 crc kubenswrapper[4699]: I0226 12:00:08.839728 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c26d871505780476f7154f1136c24b88741942dcf05b640336ae294176e2c781" Feb 26 12:00:09 crc kubenswrapper[4699]: I0226 12:00:09.503009 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535114-zp6df"] Feb 26 12:00:09 crc kubenswrapper[4699]: I0226 12:00:09.511445 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535114-zp6df"] Feb 26 12:00:10 crc kubenswrapper[4699]: I0226 12:00:10.273598 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8acba14-233d-44a8-98b6-93df64a45300" path="/var/lib/kubelet/pods/c8acba14-233d-44a8-98b6-93df64a45300/volumes" Feb 26 12:00:11 crc kubenswrapper[4699]: I0226 12:00:11.584861 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:00:11 crc kubenswrapper[4699]: I0226 12:00:11.586286 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:00:38 crc kubenswrapper[4699]: I0226 12:00:38.948709 4699 scope.go:117] "RemoveContainer" containerID="1a649c81866f7635a569ca368b86ef4aadb641a91575dd77e87694a700822950" Feb 26 12:00:38 crc kubenswrapper[4699]: I0226 12:00:38.983470 4699 scope.go:117] "RemoveContainer" 
containerID="ffa425939368131f51ca5df0c799cff39019457552b4886c8f2b5719e7868319" Feb 26 12:00:41 crc kubenswrapper[4699]: I0226 12:00:41.584948 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:00:41 crc kubenswrapper[4699]: I0226 12:00:41.585535 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:00:41 crc kubenswrapper[4699]: I0226 12:00:41.585621 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 12:00:41 crc kubenswrapper[4699]: I0226 12:00:41.586912 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 12:00:41 crc kubenswrapper[4699]: I0226 12:00:41.587009 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" gracePeriod=600 Feb 26 12:00:41 crc kubenswrapper[4699]: E0226 12:00:41.709655 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:00:42 crc kubenswrapper[4699]: I0226 12:00:42.147396 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" exitCode=0 Feb 26 12:00:42 crc kubenswrapper[4699]: I0226 12:00:42.147479 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948"} Feb 26 12:00:42 crc kubenswrapper[4699]: I0226 12:00:42.147551 4699 scope.go:117] "RemoveContainer" containerID="4c58903dcb4a12909fcb0583a1d55149dc4c175867d594a32d984dde51ae536f" Feb 26 12:00:42 crc kubenswrapper[4699]: I0226 12:00:42.148861 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:00:42 crc kubenswrapper[4699]: E0226 12:00:42.149429 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:00:53 crc kubenswrapper[4699]: I0226 12:00:53.261318 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:00:53 crc 
kubenswrapper[4699]: E0226 12:00:53.262199 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.155355 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29535121-plvtd"] Feb 26 12:01:00 crc kubenswrapper[4699]: E0226 12:01:00.157360 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ea785f-e6f4-487c-9c19-f7bff53a2a12" containerName="collect-profiles" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.157451 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ea785f-e6f4-487c-9c19-f7bff53a2a12" containerName="collect-profiles" Feb 26 12:01:00 crc kubenswrapper[4699]: E0226 12:01:00.157510 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062171a4-9cf3-460e-822d-2dc7b5baaf9b" containerName="oc" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.157563 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="062171a4-9cf3-460e-822d-2dc7b5baaf9b" containerName="oc" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.157798 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ea785f-e6f4-487c-9c19-f7bff53a2a12" containerName="collect-profiles" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.157873 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="062171a4-9cf3-460e-822d-2dc7b5baaf9b" containerName="oc" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.158691 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.169770 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535121-plvtd"] Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.353870 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-config-data\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.353993 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-combined-ca-bundle\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.354148 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-fernet-keys\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.354185 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ng9p\" (UniqueName: \"kubernetes.io/projected/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-kube-api-access-6ng9p\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.455659 4699 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-combined-ca-bundle\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.455715 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-fernet-keys\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.455732 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ng9p\" (UniqueName: \"kubernetes.io/projected/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-kube-api-access-6ng9p\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.455844 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-config-data\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.464212 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-combined-ca-bundle\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.464417 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-config-data\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.464839 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-fernet-keys\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.475892 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ng9p\" (UniqueName: \"kubernetes.io/projected/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-kube-api-access-6ng9p\") pod \"keystone-cron-29535121-plvtd\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.499709 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:00 crc kubenswrapper[4699]: I0226 12:01:00.956667 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535121-plvtd"] Feb 26 12:01:00 crc kubenswrapper[4699]: W0226 12:01:00.961411 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef1e8bd7_66e8_4eef_979e_8bf3e57b2a68.slice/crio-3d58b8dcce2fcc1b977dea1a22af6516e2315c5295cd925274627d773832fceb WatchSource:0}: Error finding container 3d58b8dcce2fcc1b977dea1a22af6516e2315c5295cd925274627d773832fceb: Status 404 returned error can't find the container with id 3d58b8dcce2fcc1b977dea1a22af6516e2315c5295cd925274627d773832fceb Feb 26 12:01:01 crc kubenswrapper[4699]: I0226 12:01:01.327850 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535121-plvtd" event={"ID":"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68","Type":"ContainerStarted","Data":"f82b7e69cb8fe7ef0bdf92eb3048b514e80df2fb3095107990fb1a608f73583a"} Feb 26 12:01:01 crc kubenswrapper[4699]: I0226 12:01:01.328455 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535121-plvtd" event={"ID":"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68","Type":"ContainerStarted","Data":"3d58b8dcce2fcc1b977dea1a22af6516e2315c5295cd925274627d773832fceb"} Feb 26 12:01:01 crc kubenswrapper[4699]: I0226 12:01:01.354722 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29535121-plvtd" podStartSLOduration=1.354690347 podStartE2EDuration="1.354690347s" podCreationTimestamp="2026-02-26 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 12:01:01.344929117 +0000 UTC m=+3007.155755571" watchObservedRunningTime="2026-02-26 12:01:01.354690347 +0000 UTC m=+3007.165516791" Feb 26 12:01:03 crc 
kubenswrapper[4699]: I0226 12:01:03.346695 4699 generic.go:334] "Generic (PLEG): container finished" podID="ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68" containerID="f82b7e69cb8fe7ef0bdf92eb3048b514e80df2fb3095107990fb1a608f73583a" exitCode=0 Feb 26 12:01:03 crc kubenswrapper[4699]: I0226 12:01:03.346785 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535121-plvtd" event={"ID":"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68","Type":"ContainerDied","Data":"f82b7e69cb8fe7ef0bdf92eb3048b514e80df2fb3095107990fb1a608f73583a"} Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.262023 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:01:04 crc kubenswrapper[4699]: E0226 12:01:04.262352 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.751931 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.851616 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ng9p\" (UniqueName: \"kubernetes.io/projected/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-kube-api-access-6ng9p\") pod \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.851711 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-config-data\") pod \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.851751 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-combined-ca-bundle\") pod \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.851805 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-fernet-keys\") pod \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\" (UID: \"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68\") " Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.857875 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68" (UID: "ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.871503 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-kube-api-access-6ng9p" (OuterVolumeSpecName: "kube-api-access-6ng9p") pod "ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68" (UID: "ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68"). InnerVolumeSpecName "kube-api-access-6ng9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.894553 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68" (UID: "ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.917810 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-config-data" (OuterVolumeSpecName: "config-data") pod "ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68" (UID: "ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.954763 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ng9p\" (UniqueName: \"kubernetes.io/projected/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-kube-api-access-6ng9p\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.954796 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.954805 4699 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:04 crc kubenswrapper[4699]: I0226 12:01:04.954813 4699 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:05 crc kubenswrapper[4699]: I0226 12:01:05.369234 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535121-plvtd" event={"ID":"ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68","Type":"ContainerDied","Data":"3d58b8dcce2fcc1b977dea1a22af6516e2315c5295cd925274627d773832fceb"} Feb 26 12:01:05 crc kubenswrapper[4699]: I0226 12:01:05.369702 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d58b8dcce2fcc1b977dea1a22af6516e2315c5295cd925274627d773832fceb" Feb 26 12:01:05 crc kubenswrapper[4699]: I0226 12:01:05.369309 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535121-plvtd" Feb 26 12:01:18 crc kubenswrapper[4699]: I0226 12:01:18.260782 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:01:18 crc kubenswrapper[4699]: E0226 12:01:18.261576 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:01:33 crc kubenswrapper[4699]: I0226 12:01:33.261234 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:01:33 crc kubenswrapper[4699]: E0226 12:01:33.262419 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:01:46 crc kubenswrapper[4699]: I0226 12:01:46.270380 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:01:46 crc kubenswrapper[4699]: E0226 12:01:46.271356 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.152052 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535122-xp67b"] Feb 26 12:02:00 crc kubenswrapper[4699]: E0226 12:02:00.153266 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68" containerName="keystone-cron" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.153284 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68" containerName="keystone-cron" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.153572 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68" containerName="keystone-cron" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.154378 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535122-xp67b" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.159663 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.159685 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.160468 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.162459 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535122-xp67b"] Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.268195 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg5h4\" (UniqueName: 
\"kubernetes.io/projected/27a271ab-4d30-4863-b3f6-74750cc65a91-kube-api-access-dg5h4\") pod \"auto-csr-approver-29535122-xp67b\" (UID: \"27a271ab-4d30-4863-b3f6-74750cc65a91\") " pod="openshift-infra/auto-csr-approver-29535122-xp67b" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.370426 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg5h4\" (UniqueName: \"kubernetes.io/projected/27a271ab-4d30-4863-b3f6-74750cc65a91-kube-api-access-dg5h4\") pod \"auto-csr-approver-29535122-xp67b\" (UID: \"27a271ab-4d30-4863-b3f6-74750cc65a91\") " pod="openshift-infra/auto-csr-approver-29535122-xp67b" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.393027 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg5h4\" (UniqueName: \"kubernetes.io/projected/27a271ab-4d30-4863-b3f6-74750cc65a91-kube-api-access-dg5h4\") pod \"auto-csr-approver-29535122-xp67b\" (UID: \"27a271ab-4d30-4863-b3f6-74750cc65a91\") " pod="openshift-infra/auto-csr-approver-29535122-xp67b" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.490848 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535122-xp67b" Feb 26 12:02:00 crc kubenswrapper[4699]: I0226 12:02:00.956147 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535122-xp67b"] Feb 26 12:02:01 crc kubenswrapper[4699]: I0226 12:02:01.031331 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535122-xp67b" event={"ID":"27a271ab-4d30-4863-b3f6-74750cc65a91","Type":"ContainerStarted","Data":"429c6a30970d5da958deab22c2e4bb6cf27c687d3ad8243aeacaea04a0d870dd"} Feb 26 12:02:01 crc kubenswrapper[4699]: I0226 12:02:01.260633 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:02:01 crc kubenswrapper[4699]: E0226 12:02:01.261011 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:02:03 crc kubenswrapper[4699]: I0226 12:02:03.070183 4699 generic.go:334] "Generic (PLEG): container finished" podID="27a271ab-4d30-4863-b3f6-74750cc65a91" containerID="f8af8d4fb65b858c79bdd65cde626e347dc9e20fd0df6dcb1821aae0c9ee9b41" exitCode=0 Feb 26 12:02:03 crc kubenswrapper[4699]: I0226 12:02:03.070289 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535122-xp67b" event={"ID":"27a271ab-4d30-4863-b3f6-74750cc65a91","Type":"ContainerDied","Data":"f8af8d4fb65b858c79bdd65cde626e347dc9e20fd0df6dcb1821aae0c9ee9b41"} Feb 26 12:02:04 crc kubenswrapper[4699]: I0226 12:02:04.468938 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535122-xp67b" Feb 26 12:02:04 crc kubenswrapper[4699]: I0226 12:02:04.650851 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg5h4\" (UniqueName: \"kubernetes.io/projected/27a271ab-4d30-4863-b3f6-74750cc65a91-kube-api-access-dg5h4\") pod \"27a271ab-4d30-4863-b3f6-74750cc65a91\" (UID: \"27a271ab-4d30-4863-b3f6-74750cc65a91\") " Feb 26 12:02:04 crc kubenswrapper[4699]: I0226 12:02:04.657999 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a271ab-4d30-4863-b3f6-74750cc65a91-kube-api-access-dg5h4" (OuterVolumeSpecName: "kube-api-access-dg5h4") pod "27a271ab-4d30-4863-b3f6-74750cc65a91" (UID: "27a271ab-4d30-4863-b3f6-74750cc65a91"). InnerVolumeSpecName "kube-api-access-dg5h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:02:04 crc kubenswrapper[4699]: I0226 12:02:04.753821 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg5h4\" (UniqueName: \"kubernetes.io/projected/27a271ab-4d30-4863-b3f6-74750cc65a91-kube-api-access-dg5h4\") on node \"crc\" DevicePath \"\"" Feb 26 12:02:05 crc kubenswrapper[4699]: I0226 12:02:05.093127 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535122-xp67b" event={"ID":"27a271ab-4d30-4863-b3f6-74750cc65a91","Type":"ContainerDied","Data":"429c6a30970d5da958deab22c2e4bb6cf27c687d3ad8243aeacaea04a0d870dd"} Feb 26 12:02:05 crc kubenswrapper[4699]: I0226 12:02:05.093169 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535122-xp67b" Feb 26 12:02:05 crc kubenswrapper[4699]: I0226 12:02:05.093183 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429c6a30970d5da958deab22c2e4bb6cf27c687d3ad8243aeacaea04a0d870dd" Feb 26 12:02:05 crc kubenswrapper[4699]: I0226 12:02:05.535952 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535116-fwrcj"] Feb 26 12:02:05 crc kubenswrapper[4699]: I0226 12:02:05.544802 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535116-fwrcj"] Feb 26 12:02:06 crc kubenswrapper[4699]: I0226 12:02:06.274250 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67852f2-cfab-4e51-b986-30f2a582877d" path="/var/lib/kubelet/pods/f67852f2-cfab-4e51-b986-30f2a582877d/volumes" Feb 26 12:02:14 crc kubenswrapper[4699]: I0226 12:02:14.261423 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:02:14 crc kubenswrapper[4699]: E0226 12:02:14.263826 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:02:26 crc kubenswrapper[4699]: I0226 12:02:26.270075 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:02:26 crc kubenswrapper[4699]: E0226 12:02:26.271247 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:02:39 crc kubenswrapper[4699]: I0226 12:02:39.164004 4699 scope.go:117] "RemoveContainer" containerID="97b5ef4eef61ea4aaf36ee8c050903fab28c7dee69a56263785c220e6a8c6292" Feb 26 12:02:41 crc kubenswrapper[4699]: I0226 12:02:41.260779 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:02:41 crc kubenswrapper[4699]: E0226 12:02:41.261472 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:02:55 crc kubenswrapper[4699]: I0226 12:02:55.261556 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:02:55 crc kubenswrapper[4699]: E0226 12:02:55.262570 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:03:08 crc kubenswrapper[4699]: I0226 12:03:08.260407 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:03:08 crc kubenswrapper[4699]: 
E0226 12:03:08.261218 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:03:20 crc kubenswrapper[4699]: I0226 12:03:20.261577 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:03:20 crc kubenswrapper[4699]: E0226 12:03:20.262425 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:03:34 crc kubenswrapper[4699]: I0226 12:03:34.424912 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:03:34 crc kubenswrapper[4699]: E0226 12:03:34.425681 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.082023 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8vqbk"] Feb 26 12:03:39 crc 
kubenswrapper[4699]: E0226 12:03:39.083415 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a271ab-4d30-4863-b3f6-74750cc65a91" containerName="oc" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.083440 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a271ab-4d30-4863-b3f6-74750cc65a91" containerName="oc" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.083743 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a271ab-4d30-4863-b3f6-74750cc65a91" containerName="oc" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.086019 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.092701 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8vqbk"] Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.207469 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxc8\" (UniqueName: \"kubernetes.io/projected/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-kube-api-access-wvxc8\") pod \"community-operators-8vqbk\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.207516 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-utilities\") pod \"community-operators-8vqbk\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.207599 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-catalog-content\") pod \"community-operators-8vqbk\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.309831 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-catalog-content\") pod \"community-operators-8vqbk\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.310263 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvxc8\" (UniqueName: \"kubernetes.io/projected/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-kube-api-access-wvxc8\") pod \"community-operators-8vqbk\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.310343 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-catalog-content\") pod \"community-operators-8vqbk\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.310462 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-utilities\") pod \"community-operators-8vqbk\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.310791 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-utilities\") pod \"community-operators-8vqbk\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.329762 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvxc8\" (UniqueName: \"kubernetes.io/projected/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-kube-api-access-wvxc8\") pod \"community-operators-8vqbk\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.406243 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:39 crc kubenswrapper[4699]: I0226 12:03:39.960230 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8vqbk"] Feb 26 12:03:40 crc kubenswrapper[4699]: I0226 12:03:40.115642 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqbk" event={"ID":"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22","Type":"ContainerStarted","Data":"c60a2faa98f74227c56a6cae4e1cd0d9f59ceb45b2853ca42298e281b80a3b6c"} Feb 26 12:03:41 crc kubenswrapper[4699]: I0226 12:03:41.125306 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" containerID="654fc2f37bae8ca8d3c43bcce6dc15128e8e0a662bbe83f02c4ba7d9f7bc3ba0" exitCode=0 Feb 26 12:03:41 crc kubenswrapper[4699]: I0226 12:03:41.125369 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqbk" event={"ID":"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22","Type":"ContainerDied","Data":"654fc2f37bae8ca8d3c43bcce6dc15128e8e0a662bbe83f02c4ba7d9f7bc3ba0"} Feb 26 12:03:41 crc kubenswrapper[4699]: I0226 12:03:41.127328 4699 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 26 12:03:42 crc kubenswrapper[4699]: I0226 12:03:42.136178 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqbk" event={"ID":"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22","Type":"ContainerStarted","Data":"fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed"} Feb 26 12:03:43 crc kubenswrapper[4699]: I0226 12:03:43.145239 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" containerID="fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed" exitCode=0 Feb 26 12:03:43 crc kubenswrapper[4699]: I0226 12:03:43.145358 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqbk" event={"ID":"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22","Type":"ContainerDied","Data":"fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed"} Feb 26 12:03:44 crc kubenswrapper[4699]: I0226 12:03:44.164668 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqbk" event={"ID":"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22","Type":"ContainerStarted","Data":"f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5"} Feb 26 12:03:44 crc kubenswrapper[4699]: I0226 12:03:44.184646 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8vqbk" podStartSLOduration=2.73788143 podStartE2EDuration="5.184601837s" podCreationTimestamp="2026-02-26 12:03:39 +0000 UTC" firstStartedPulling="2026-02-26 12:03:41.127039089 +0000 UTC m=+3166.937865523" lastFinishedPulling="2026-02-26 12:03:43.573759496 +0000 UTC m=+3169.384585930" observedRunningTime="2026-02-26 12:03:44.18153607 +0000 UTC m=+3169.992362504" watchObservedRunningTime="2026-02-26 12:03:44.184601837 +0000 UTC m=+3169.995428281" Feb 26 12:03:48 crc kubenswrapper[4699]: I0226 12:03:48.261024 4699 scope.go:117] 
"RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:03:48 crc kubenswrapper[4699]: E0226 12:03:48.262005 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:03:49 crc kubenswrapper[4699]: I0226 12:03:49.406626 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:49 crc kubenswrapper[4699]: I0226 12:03:49.406942 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:49 crc kubenswrapper[4699]: I0226 12:03:49.454408 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:50 crc kubenswrapper[4699]: I0226 12:03:50.253666 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:50 crc kubenswrapper[4699]: I0226 12:03:50.308531 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8vqbk"] Feb 26 12:03:52 crc kubenswrapper[4699]: I0226 12:03:52.227926 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8vqbk" podUID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" containerName="registry-server" containerID="cri-o://f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5" gracePeriod=2 Feb 26 12:03:52 crc kubenswrapper[4699]: I0226 12:03:52.736308 4699 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:52 crc kubenswrapper[4699]: I0226 12:03:52.875026 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-utilities\") pod \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " Feb 26 12:03:52 crc kubenswrapper[4699]: I0226 12:03:52.875087 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-catalog-content\") pod \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " Feb 26 12:03:52 crc kubenswrapper[4699]: I0226 12:03:52.875210 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvxc8\" (UniqueName: \"kubernetes.io/projected/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-kube-api-access-wvxc8\") pod \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\" (UID: \"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22\") " Feb 26 12:03:52 crc kubenswrapper[4699]: I0226 12:03:52.876507 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-utilities" (OuterVolumeSpecName: "utilities") pod "e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" (UID: "e2958cda-c404-4b0e-a6b4-e32bdd5b2b22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:03:52 crc kubenswrapper[4699]: I0226 12:03:52.880954 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-kube-api-access-wvxc8" (OuterVolumeSpecName: "kube-api-access-wvxc8") pod "e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" (UID: "e2958cda-c404-4b0e-a6b4-e32bdd5b2b22"). InnerVolumeSpecName "kube-api-access-wvxc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:03:52 crc kubenswrapper[4699]: I0226 12:03:52.934002 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" (UID: "e2958cda-c404-4b0e-a6b4-e32bdd5b2b22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:03:52 crc kubenswrapper[4699]: I0226 12:03:52.976814 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:03:52 crc kubenswrapper[4699]: I0226 12:03:52.976842 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:03:52 crc kubenswrapper[4699]: I0226 12:03:52.976853 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvxc8\" (UniqueName: \"kubernetes.io/projected/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22-kube-api-access-wvxc8\") on node \"crc\" DevicePath \"\"" Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.239578 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" containerID="f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5" exitCode=0 Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.239629 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqbk" event={"ID":"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22","Type":"ContainerDied","Data":"f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5"} Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.239657 4699 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-8vqbk" event={"ID":"e2958cda-c404-4b0e-a6b4-e32bdd5b2b22","Type":"ContainerDied","Data":"c60a2faa98f74227c56a6cae4e1cd0d9f59ceb45b2853ca42298e281b80a3b6c"} Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.239677 4699 scope.go:117] "RemoveContainer" containerID="f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5" Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.239836 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vqbk" Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.291084 4699 scope.go:117] "RemoveContainer" containerID="fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed" Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.301205 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8vqbk"] Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.317826 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8vqbk"] Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.330748 4699 scope.go:117] "RemoveContainer" containerID="654fc2f37bae8ca8d3c43bcce6dc15128e8e0a662bbe83f02c4ba7d9f7bc3ba0" Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.364917 4699 scope.go:117] "RemoveContainer" containerID="f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5" Feb 26 12:03:53 crc kubenswrapper[4699]: E0226 12:03:53.365678 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5\": container with ID starting with f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5 not found: ID does not exist" containerID="f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5" Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 
12:03:53.365751 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5"} err="failed to get container status \"f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5\": rpc error: code = NotFound desc = could not find container \"f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5\": container with ID starting with f4243150cc96c5ee66d4d44a57548498e0161326cbcaf1706a67a62345ae76a5 not found: ID does not exist" Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.365782 4699 scope.go:117] "RemoveContainer" containerID="fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed" Feb 26 12:03:53 crc kubenswrapper[4699]: E0226 12:03:53.366301 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed\": container with ID starting with fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed not found: ID does not exist" containerID="fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed" Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.366338 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed"} err="failed to get container status \"fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed\": rpc error: code = NotFound desc = could not find container \"fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed\": container with ID starting with fa8520e59eea88936b0e5fd507b8acacf32d49bb81261491b76e719b13c07aed not found: ID does not exist" Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.366360 4699 scope.go:117] "RemoveContainer" containerID="654fc2f37bae8ca8d3c43bcce6dc15128e8e0a662bbe83f02c4ba7d9f7bc3ba0" Feb 26 12:03:53 crc 
kubenswrapper[4699]: E0226 12:03:53.366682 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654fc2f37bae8ca8d3c43bcce6dc15128e8e0a662bbe83f02c4ba7d9f7bc3ba0\": container with ID starting with 654fc2f37bae8ca8d3c43bcce6dc15128e8e0a662bbe83f02c4ba7d9f7bc3ba0 not found: ID does not exist" containerID="654fc2f37bae8ca8d3c43bcce6dc15128e8e0a662bbe83f02c4ba7d9f7bc3ba0" Feb 26 12:03:53 crc kubenswrapper[4699]: I0226 12:03:53.366704 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654fc2f37bae8ca8d3c43bcce6dc15128e8e0a662bbe83f02c4ba7d9f7bc3ba0"} err="failed to get container status \"654fc2f37bae8ca8d3c43bcce6dc15128e8e0a662bbe83f02c4ba7d9f7bc3ba0\": rpc error: code = NotFound desc = could not find container \"654fc2f37bae8ca8d3c43bcce6dc15128e8e0a662bbe83f02c4ba7d9f7bc3ba0\": container with ID starting with 654fc2f37bae8ca8d3c43bcce6dc15128e8e0a662bbe83f02c4ba7d9f7bc3ba0 not found: ID does not exist" Feb 26 12:03:54 crc kubenswrapper[4699]: I0226 12:03:54.282494 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" path="/var/lib/kubelet/pods/e2958cda-c404-4b0e-a6b4-e32bdd5b2b22/volumes" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.159411 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535124-gqg7z"] Feb 26 12:04:00 crc kubenswrapper[4699]: E0226 12:04:00.160619 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" containerName="extract-content" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.160644 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" containerName="extract-content" Feb 26 12:04:00 crc kubenswrapper[4699]: E0226 12:04:00.160665 4699 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" containerName="registry-server" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.160676 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" containerName="registry-server" Feb 26 12:04:00 crc kubenswrapper[4699]: E0226 12:04:00.160750 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" containerName="extract-utilities" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.160764 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" containerName="extract-utilities" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.161128 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2958cda-c404-4b0e-a6b4-e32bdd5b2b22" containerName="registry-server" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.162228 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535124-gqg7z" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.164633 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.164644 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.165083 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.168824 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535124-gqg7z"] Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.324350 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-968p8\" (UniqueName: 
\"kubernetes.io/projected/af10706a-2423-4bb2-b0a5-de33b64b4b64-kube-api-access-968p8\") pod \"auto-csr-approver-29535124-gqg7z\" (UID: \"af10706a-2423-4bb2-b0a5-de33b64b4b64\") " pod="openshift-infra/auto-csr-approver-29535124-gqg7z" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.426379 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-968p8\" (UniqueName: \"kubernetes.io/projected/af10706a-2423-4bb2-b0a5-de33b64b4b64-kube-api-access-968p8\") pod \"auto-csr-approver-29535124-gqg7z\" (UID: \"af10706a-2423-4bb2-b0a5-de33b64b4b64\") " pod="openshift-infra/auto-csr-approver-29535124-gqg7z" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.444824 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-968p8\" (UniqueName: \"kubernetes.io/projected/af10706a-2423-4bb2-b0a5-de33b64b4b64-kube-api-access-968p8\") pod \"auto-csr-approver-29535124-gqg7z\" (UID: \"af10706a-2423-4bb2-b0a5-de33b64b4b64\") " pod="openshift-infra/auto-csr-approver-29535124-gqg7z" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.480916 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535124-gqg7z" Feb 26 12:04:00 crc kubenswrapper[4699]: I0226 12:04:00.911465 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535124-gqg7z"] Feb 26 12:04:01 crc kubenswrapper[4699]: I0226 12:04:01.261630 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:04:01 crc kubenswrapper[4699]: E0226 12:04:01.261949 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:04:01 crc kubenswrapper[4699]: I0226 12:04:01.407042 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535124-gqg7z" event={"ID":"af10706a-2423-4bb2-b0a5-de33b64b4b64","Type":"ContainerStarted","Data":"918aa5b648841b62f88b5b7c296ae0cea3220bd06e5b4ae982efbab913fb89fa"} Feb 26 12:04:03 crc kubenswrapper[4699]: I0226 12:04:03.425066 4699 generic.go:334] "Generic (PLEG): container finished" podID="af10706a-2423-4bb2-b0a5-de33b64b4b64" containerID="bee6179034d0d615200cc2b0cca46b2b7ac3bbc955a96024e317fe4212ffc149" exitCode=0 Feb 26 12:04:03 crc kubenswrapper[4699]: I0226 12:04:03.425550 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535124-gqg7z" event={"ID":"af10706a-2423-4bb2-b0a5-de33b64b4b64","Type":"ContainerDied","Data":"bee6179034d0d615200cc2b0cca46b2b7ac3bbc955a96024e317fe4212ffc149"} Feb 26 12:04:04 crc kubenswrapper[4699]: I0226 12:04:04.907904 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535124-gqg7z" Feb 26 12:04:05 crc kubenswrapper[4699]: I0226 12:04:05.028768 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-968p8\" (UniqueName: \"kubernetes.io/projected/af10706a-2423-4bb2-b0a5-de33b64b4b64-kube-api-access-968p8\") pod \"af10706a-2423-4bb2-b0a5-de33b64b4b64\" (UID: \"af10706a-2423-4bb2-b0a5-de33b64b4b64\") " Feb 26 12:04:05 crc kubenswrapper[4699]: I0226 12:04:05.037340 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af10706a-2423-4bb2-b0a5-de33b64b4b64-kube-api-access-968p8" (OuterVolumeSpecName: "kube-api-access-968p8") pod "af10706a-2423-4bb2-b0a5-de33b64b4b64" (UID: "af10706a-2423-4bb2-b0a5-de33b64b4b64"). InnerVolumeSpecName "kube-api-access-968p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:04:05 crc kubenswrapper[4699]: I0226 12:04:05.131920 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-968p8\" (UniqueName: \"kubernetes.io/projected/af10706a-2423-4bb2-b0a5-de33b64b4b64-kube-api-access-968p8\") on node \"crc\" DevicePath \"\"" Feb 26 12:04:05 crc kubenswrapper[4699]: I0226 12:04:05.442405 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535124-gqg7z" event={"ID":"af10706a-2423-4bb2-b0a5-de33b64b4b64","Type":"ContainerDied","Data":"918aa5b648841b62f88b5b7c296ae0cea3220bd06e5b4ae982efbab913fb89fa"} Feb 26 12:04:05 crc kubenswrapper[4699]: I0226 12:04:05.442446 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="918aa5b648841b62f88b5b7c296ae0cea3220bd06e5b4ae982efbab913fb89fa" Feb 26 12:04:05 crc kubenswrapper[4699]: I0226 12:04:05.442514 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535124-gqg7z" Feb 26 12:04:05 crc kubenswrapper[4699]: I0226 12:04:05.974505 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535118-n92bn"] Feb 26 12:04:05 crc kubenswrapper[4699]: I0226 12:04:05.983234 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535118-n92bn"] Feb 26 12:04:06 crc kubenswrapper[4699]: I0226 12:04:06.273026 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bde379e-7dd7-4b4b-bc25-b83d0174b100" path="/var/lib/kubelet/pods/6bde379e-7dd7-4b4b-bc25-b83d0174b100/volumes" Feb 26 12:04:15 crc kubenswrapper[4699]: I0226 12:04:15.261030 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:04:15 crc kubenswrapper[4699]: E0226 12:04:15.262018 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:04:27 crc kubenswrapper[4699]: I0226 12:04:27.262204 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:04:27 crc kubenswrapper[4699]: E0226 12:04:27.263678 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:04:38 crc kubenswrapper[4699]: I0226 12:04:38.261410 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:04:38 crc kubenswrapper[4699]: E0226 12:04:38.262282 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:04:39 crc kubenswrapper[4699]: I0226 12:04:39.263483 4699 scope.go:117] "RemoveContainer" containerID="7ee1327d152002290262452d2af09136d94e1e411a1eeb32531cce9b1d48c20c" Feb 26 12:04:50 crc kubenswrapper[4699]: I0226 12:04:50.263621 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:04:50 crc kubenswrapper[4699]: E0226 12:04:50.264501 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:05:02 crc kubenswrapper[4699]: I0226 12:05:02.278851 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:05:02 crc kubenswrapper[4699]: E0226 12:05:02.279638 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:05:16 crc kubenswrapper[4699]: I0226 12:05:16.267316 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:05:16 crc kubenswrapper[4699]: E0226 12:05:16.268298 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:05:31 crc kubenswrapper[4699]: I0226 12:05:31.260899 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:05:31 crc kubenswrapper[4699]: E0226 12:05:31.261750 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:05:44 crc kubenswrapper[4699]: I0226 12:05:44.260629 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:05:45 crc kubenswrapper[4699]: I0226 12:05:45.336213 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"243333360a6594f8acbe71e1e9197448e74ac1a0258671779fb6af974ca032dd"} Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.207473 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535126-n7gpm"] Feb 26 12:06:00 crc kubenswrapper[4699]: E0226 12:06:00.208400 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af10706a-2423-4bb2-b0a5-de33b64b4b64" containerName="oc" Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.208416 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="af10706a-2423-4bb2-b0a5-de33b64b4b64" containerName="oc" Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.208596 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="af10706a-2423-4bb2-b0a5-de33b64b4b64" containerName="oc" Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.209246 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535126-n7gpm" Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.211501 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.211501 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.211556 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.231613 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535126-n7gpm"] Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.360253 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxqxp\" (UniqueName: 
\"kubernetes.io/projected/d62b893f-dc84-4f3a-9c62-5c49c65be99f-kube-api-access-jxqxp\") pod \"auto-csr-approver-29535126-n7gpm\" (UID: \"d62b893f-dc84-4f3a-9c62-5c49c65be99f\") " pod="openshift-infra/auto-csr-approver-29535126-n7gpm" Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.462850 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxqxp\" (UniqueName: \"kubernetes.io/projected/d62b893f-dc84-4f3a-9c62-5c49c65be99f-kube-api-access-jxqxp\") pod \"auto-csr-approver-29535126-n7gpm\" (UID: \"d62b893f-dc84-4f3a-9c62-5c49c65be99f\") " pod="openshift-infra/auto-csr-approver-29535126-n7gpm" Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.483003 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxqxp\" (UniqueName: \"kubernetes.io/projected/d62b893f-dc84-4f3a-9c62-5c49c65be99f-kube-api-access-jxqxp\") pod \"auto-csr-approver-29535126-n7gpm\" (UID: \"d62b893f-dc84-4f3a-9c62-5c49c65be99f\") " pod="openshift-infra/auto-csr-approver-29535126-n7gpm" Feb 26 12:06:00 crc kubenswrapper[4699]: I0226 12:06:00.531909 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535126-n7gpm" Feb 26 12:06:01 crc kubenswrapper[4699]: I0226 12:06:01.028463 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535126-n7gpm"] Feb 26 12:06:01 crc kubenswrapper[4699]: W0226 12:06:01.032312 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd62b893f_dc84_4f3a_9c62_5c49c65be99f.slice/crio-dc3a2105108f55c117666ba4cf91597c2e85da666a181b7ff3883c021b5428a4 WatchSource:0}: Error finding container dc3a2105108f55c117666ba4cf91597c2e85da666a181b7ff3883c021b5428a4: Status 404 returned error can't find the container with id dc3a2105108f55c117666ba4cf91597c2e85da666a181b7ff3883c021b5428a4 Feb 26 12:06:01 crc kubenswrapper[4699]: I0226 12:06:01.469539 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535126-n7gpm" event={"ID":"d62b893f-dc84-4f3a-9c62-5c49c65be99f","Type":"ContainerStarted","Data":"dc3a2105108f55c117666ba4cf91597c2e85da666a181b7ff3883c021b5428a4"} Feb 26 12:06:03 crc kubenswrapper[4699]: I0226 12:06:03.488495 4699 generic.go:334] "Generic (PLEG): container finished" podID="d62b893f-dc84-4f3a-9c62-5c49c65be99f" containerID="0a9b5f9a5f2d730b937d8d7362f22b7e6fe3edad8ecb5523a71d611f339c4a8e" exitCode=0 Feb 26 12:06:03 crc kubenswrapper[4699]: I0226 12:06:03.488584 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535126-n7gpm" event={"ID":"d62b893f-dc84-4f3a-9c62-5c49c65be99f","Type":"ContainerDied","Data":"0a9b5f9a5f2d730b937d8d7362f22b7e6fe3edad8ecb5523a71d611f339c4a8e"} Feb 26 12:06:04 crc kubenswrapper[4699]: I0226 12:06:04.904413 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535126-n7gpm" Feb 26 12:06:05 crc kubenswrapper[4699]: I0226 12:06:05.059633 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxqxp\" (UniqueName: \"kubernetes.io/projected/d62b893f-dc84-4f3a-9c62-5c49c65be99f-kube-api-access-jxqxp\") pod \"d62b893f-dc84-4f3a-9c62-5c49c65be99f\" (UID: \"d62b893f-dc84-4f3a-9c62-5c49c65be99f\") " Feb 26 12:06:05 crc kubenswrapper[4699]: I0226 12:06:05.065811 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d62b893f-dc84-4f3a-9c62-5c49c65be99f-kube-api-access-jxqxp" (OuterVolumeSpecName: "kube-api-access-jxqxp") pod "d62b893f-dc84-4f3a-9c62-5c49c65be99f" (UID: "d62b893f-dc84-4f3a-9c62-5c49c65be99f"). InnerVolumeSpecName "kube-api-access-jxqxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:06:05 crc kubenswrapper[4699]: I0226 12:06:05.162261 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxqxp\" (UniqueName: \"kubernetes.io/projected/d62b893f-dc84-4f3a-9c62-5c49c65be99f-kube-api-access-jxqxp\") on node \"crc\" DevicePath \"\"" Feb 26 12:06:05 crc kubenswrapper[4699]: I0226 12:06:05.508915 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535126-n7gpm" event={"ID":"d62b893f-dc84-4f3a-9c62-5c49c65be99f","Type":"ContainerDied","Data":"dc3a2105108f55c117666ba4cf91597c2e85da666a181b7ff3883c021b5428a4"} Feb 26 12:06:05 crc kubenswrapper[4699]: I0226 12:06:05.509225 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc3a2105108f55c117666ba4cf91597c2e85da666a181b7ff3883c021b5428a4" Feb 26 12:06:05 crc kubenswrapper[4699]: I0226 12:06:05.508973 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535126-n7gpm" Feb 26 12:06:05 crc kubenswrapper[4699]: I0226 12:06:05.981433 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535120-xkftf"] Feb 26 12:06:05 crc kubenswrapper[4699]: I0226 12:06:05.991106 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535120-xkftf"] Feb 26 12:06:06 crc kubenswrapper[4699]: I0226 12:06:06.272590 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062171a4-9cf3-460e-822d-2dc7b5baaf9b" path="/var/lib/kubelet/pods/062171a4-9cf3-460e-822d-2dc7b5baaf9b/volumes" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.198970 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zjml6"] Feb 26 12:06:10 crc kubenswrapper[4699]: E0226 12:06:10.200077 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62b893f-dc84-4f3a-9c62-5c49c65be99f" containerName="oc" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.200098 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62b893f-dc84-4f3a-9c62-5c49c65be99f" containerName="oc" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.200408 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="d62b893f-dc84-4f3a-9c62-5c49c65be99f" containerName="oc" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.201759 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.214240 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjml6"] Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.234491 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-utilities\") pod \"certified-operators-zjml6\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.234728 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-catalog-content\") pod \"certified-operators-zjml6\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.234804 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjkp\" (UniqueName: \"kubernetes.io/projected/9d1592fb-fd51-4a05-ac77-6094fe72263b-kube-api-access-8zjkp\") pod \"certified-operators-zjml6\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.336039 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-utilities\") pod \"certified-operators-zjml6\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.336419 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-catalog-content\") pod \"certified-operators-zjml6\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.336469 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zjkp\" (UniqueName: \"kubernetes.io/projected/9d1592fb-fd51-4a05-ac77-6094fe72263b-kube-api-access-8zjkp\") pod \"certified-operators-zjml6\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.336790 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-utilities\") pod \"certified-operators-zjml6\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.337081 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-catalog-content\") pod \"certified-operators-zjml6\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.357689 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zjkp\" (UniqueName: \"kubernetes.io/projected/9d1592fb-fd51-4a05-ac77-6094fe72263b-kube-api-access-8zjkp\") pod \"certified-operators-zjml6\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:10 crc kubenswrapper[4699]: I0226 12:06:10.523777 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:11 crc kubenswrapper[4699]: I0226 12:06:11.036556 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjml6"] Feb 26 12:06:11 crc kubenswrapper[4699]: I0226 12:06:11.566641 4699 generic.go:334] "Generic (PLEG): container finished" podID="9d1592fb-fd51-4a05-ac77-6094fe72263b" containerID="ba60cd8917eafa6b99bbae2da6aee01e1b66669a8cfe9e66cb7282c9c7dbc3db" exitCode=0 Feb 26 12:06:11 crc kubenswrapper[4699]: I0226 12:06:11.566698 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjml6" event={"ID":"9d1592fb-fd51-4a05-ac77-6094fe72263b","Type":"ContainerDied","Data":"ba60cd8917eafa6b99bbae2da6aee01e1b66669a8cfe9e66cb7282c9c7dbc3db"} Feb 26 12:06:11 crc kubenswrapper[4699]: I0226 12:06:11.567001 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjml6" event={"ID":"9d1592fb-fd51-4a05-ac77-6094fe72263b","Type":"ContainerStarted","Data":"d39ed2334cbf7e63d9cedce8990750b2dd76b1f97bcd1a16679cb77c3660aa1e"} Feb 26 12:06:12 crc kubenswrapper[4699]: I0226 12:06:12.604510 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjml6" event={"ID":"9d1592fb-fd51-4a05-ac77-6094fe72263b","Type":"ContainerStarted","Data":"8fa98d33fc879620f19d54870e7f72b49540998de1c851b86a918b5e6e0ac2c7"} Feb 26 12:06:13 crc kubenswrapper[4699]: I0226 12:06:13.614939 4699 generic.go:334] "Generic (PLEG): container finished" podID="9d1592fb-fd51-4a05-ac77-6094fe72263b" containerID="8fa98d33fc879620f19d54870e7f72b49540998de1c851b86a918b5e6e0ac2c7" exitCode=0 Feb 26 12:06:13 crc kubenswrapper[4699]: I0226 12:06:13.614970 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjml6" 
event={"ID":"9d1592fb-fd51-4a05-ac77-6094fe72263b","Type":"ContainerDied","Data":"8fa98d33fc879620f19d54870e7f72b49540998de1c851b86a918b5e6e0ac2c7"} Feb 26 12:06:17 crc kubenswrapper[4699]: I0226 12:06:17.655728 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjml6" event={"ID":"9d1592fb-fd51-4a05-ac77-6094fe72263b","Type":"ContainerStarted","Data":"edbe3597e20e2d7f855f28a4fbb01d8f5d1116713fa18a3bb3297d50952c9300"} Feb 26 12:06:17 crc kubenswrapper[4699]: I0226 12:06:17.676170 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zjml6" podStartSLOduration=2.466819862 podStartE2EDuration="7.676148113s" podCreationTimestamp="2026-02-26 12:06:10 +0000 UTC" firstStartedPulling="2026-02-26 12:06:11.56851023 +0000 UTC m=+3317.379336664" lastFinishedPulling="2026-02-26 12:06:16.777838481 +0000 UTC m=+3322.588664915" observedRunningTime="2026-02-26 12:06:17.674353202 +0000 UTC m=+3323.485179646" watchObservedRunningTime="2026-02-26 12:06:17.676148113 +0000 UTC m=+3323.486974547" Feb 26 12:06:20 crc kubenswrapper[4699]: I0226 12:06:20.524465 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:20 crc kubenswrapper[4699]: I0226 12:06:20.525088 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:20 crc kubenswrapper[4699]: I0226 12:06:20.573056 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.420326 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p28sp"] Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.433985 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-p28sp"] Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.434145 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.525954 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-utilities\") pod \"redhat-marketplace-p28sp\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.526028 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-catalog-content\") pod \"redhat-marketplace-p28sp\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.526219 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsd4j\" (UniqueName: \"kubernetes.io/projected/e885b191-c6b8-4780-bc61-eaae0d82ad32-kube-api-access-jsd4j\") pod \"redhat-marketplace-p28sp\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.628439 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsd4j\" (UniqueName: \"kubernetes.io/projected/e885b191-c6b8-4780-bc61-eaae0d82ad32-kube-api-access-jsd4j\") pod \"redhat-marketplace-p28sp\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.628596 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-utilities\") pod \"redhat-marketplace-p28sp\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.628647 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-catalog-content\") pod \"redhat-marketplace-p28sp\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.629081 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-catalog-content\") pod \"redhat-marketplace-p28sp\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.629174 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-utilities\") pod \"redhat-marketplace-p28sp\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.646987 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsd4j\" (UniqueName: \"kubernetes.io/projected/e885b191-c6b8-4780-bc61-eaae0d82ad32-kube-api-access-jsd4j\") pod \"redhat-marketplace-p28sp\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:29 crc kubenswrapper[4699]: I0226 12:06:29.756560 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:30 crc kubenswrapper[4699]: I0226 12:06:30.291986 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p28sp"] Feb 26 12:06:30 crc kubenswrapper[4699]: I0226 12:06:30.571578 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:30 crc kubenswrapper[4699]: I0226 12:06:30.813036 4699 generic.go:334] "Generic (PLEG): container finished" podID="e885b191-c6b8-4780-bc61-eaae0d82ad32" containerID="8160ec562f5b448ada6b4d935ee7e1221ceccd01d09aa2645648d7aa891ef261" exitCode=0 Feb 26 12:06:30 crc kubenswrapper[4699]: I0226 12:06:30.813091 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p28sp" event={"ID":"e885b191-c6b8-4780-bc61-eaae0d82ad32","Type":"ContainerDied","Data":"8160ec562f5b448ada6b4d935ee7e1221ceccd01d09aa2645648d7aa891ef261"} Feb 26 12:06:30 crc kubenswrapper[4699]: I0226 12:06:30.813137 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p28sp" event={"ID":"e885b191-c6b8-4780-bc61-eaae0d82ad32","Type":"ContainerStarted","Data":"909ae1a31f3b190a5c879bc6499a8426947ec5b29159b933f594b746178602d1"} Feb 26 12:06:32 crc kubenswrapper[4699]: I0226 12:06:32.836380 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p28sp" event={"ID":"e885b191-c6b8-4780-bc61-eaae0d82ad32","Type":"ContainerStarted","Data":"dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75"} Feb 26 12:06:32 crc kubenswrapper[4699]: I0226 12:06:32.996993 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjml6"] Feb 26 12:06:32 crc kubenswrapper[4699]: I0226 12:06:32.997632 4699 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-zjml6" podUID="9d1592fb-fd51-4a05-ac77-6094fe72263b" containerName="registry-server" containerID="cri-o://edbe3597e20e2d7f855f28a4fbb01d8f5d1116713fa18a3bb3297d50952c9300" gracePeriod=2 Feb 26 12:06:33 crc kubenswrapper[4699]: I0226 12:06:33.855099 4699 generic.go:334] "Generic (PLEG): container finished" podID="e885b191-c6b8-4780-bc61-eaae0d82ad32" containerID="dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75" exitCode=0 Feb 26 12:06:33 crc kubenswrapper[4699]: I0226 12:06:33.855293 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p28sp" event={"ID":"e885b191-c6b8-4780-bc61-eaae0d82ad32","Type":"ContainerDied","Data":"dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75"} Feb 26 12:06:33 crc kubenswrapper[4699]: I0226 12:06:33.863588 4699 generic.go:334] "Generic (PLEG): container finished" podID="9d1592fb-fd51-4a05-ac77-6094fe72263b" containerID="edbe3597e20e2d7f855f28a4fbb01d8f5d1116713fa18a3bb3297d50952c9300" exitCode=0 Feb 26 12:06:33 crc kubenswrapper[4699]: I0226 12:06:33.863639 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjml6" event={"ID":"9d1592fb-fd51-4a05-ac77-6094fe72263b","Type":"ContainerDied","Data":"edbe3597e20e2d7f855f28a4fbb01d8f5d1116713fa18a3bb3297d50952c9300"} Feb 26 12:06:33 crc kubenswrapper[4699]: I0226 12:06:33.978410 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.114134 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-catalog-content\") pod \"9d1592fb-fd51-4a05-ac77-6094fe72263b\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.114212 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-utilities\") pod \"9d1592fb-fd51-4a05-ac77-6094fe72263b\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.114424 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zjkp\" (UniqueName: \"kubernetes.io/projected/9d1592fb-fd51-4a05-ac77-6094fe72263b-kube-api-access-8zjkp\") pod \"9d1592fb-fd51-4a05-ac77-6094fe72263b\" (UID: \"9d1592fb-fd51-4a05-ac77-6094fe72263b\") " Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.115299 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-utilities" (OuterVolumeSpecName: "utilities") pod "9d1592fb-fd51-4a05-ac77-6094fe72263b" (UID: "9d1592fb-fd51-4a05-ac77-6094fe72263b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.120314 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d1592fb-fd51-4a05-ac77-6094fe72263b-kube-api-access-8zjkp" (OuterVolumeSpecName: "kube-api-access-8zjkp") pod "9d1592fb-fd51-4a05-ac77-6094fe72263b" (UID: "9d1592fb-fd51-4a05-ac77-6094fe72263b"). InnerVolumeSpecName "kube-api-access-8zjkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.168419 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d1592fb-fd51-4a05-ac77-6094fe72263b" (UID: "9d1592fb-fd51-4a05-ac77-6094fe72263b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.216848 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.216889 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1592fb-fd51-4a05-ac77-6094fe72263b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.216899 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zjkp\" (UniqueName: \"kubernetes.io/projected/9d1592fb-fd51-4a05-ac77-6094fe72263b-kube-api-access-8zjkp\") on node \"crc\" DevicePath \"\"" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.875919 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p28sp" event={"ID":"e885b191-c6b8-4780-bc61-eaae0d82ad32","Type":"ContainerStarted","Data":"ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb"} Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.879091 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjml6" event={"ID":"9d1592fb-fd51-4a05-ac77-6094fe72263b","Type":"ContainerDied","Data":"d39ed2334cbf7e63d9cedce8990750b2dd76b1f97bcd1a16679cb77c3660aa1e"} Feb 26 12:06:34 crc kubenswrapper[4699]: 
I0226 12:06:34.879160 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zjml6" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.879163 4699 scope.go:117] "RemoveContainer" containerID="edbe3597e20e2d7f855f28a4fbb01d8f5d1116713fa18a3bb3297d50952c9300" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.898468 4699 scope.go:117] "RemoveContainer" containerID="8fa98d33fc879620f19d54870e7f72b49540998de1c851b86a918b5e6e0ac2c7" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.902583 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p28sp" podStartSLOduration=2.262914161 podStartE2EDuration="5.902563085s" podCreationTimestamp="2026-02-26 12:06:29 +0000 UTC" firstStartedPulling="2026-02-26 12:06:30.814815924 +0000 UTC m=+3336.625642358" lastFinishedPulling="2026-02-26 12:06:34.454464848 +0000 UTC m=+3340.265291282" observedRunningTime="2026-02-26 12:06:34.898627404 +0000 UTC m=+3340.709453838" watchObservedRunningTime="2026-02-26 12:06:34.902563085 +0000 UTC m=+3340.713389519" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.945865 4699 scope.go:117] "RemoveContainer" containerID="ba60cd8917eafa6b99bbae2da6aee01e1b66669a8cfe9e66cb7282c9c7dbc3db" Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.956977 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjml6"] Feb 26 12:06:34 crc kubenswrapper[4699]: I0226 12:06:34.965634 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zjml6"] Feb 26 12:06:36 crc kubenswrapper[4699]: I0226 12:06:36.272392 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d1592fb-fd51-4a05-ac77-6094fe72263b" path="/var/lib/kubelet/pods/9d1592fb-fd51-4a05-ac77-6094fe72263b/volumes" Feb 26 12:06:39 crc kubenswrapper[4699]: I0226 12:06:39.381405 4699 
scope.go:117] "RemoveContainer" containerID="133bac2de294eabd3d63693bc2552e8927f3fa0a60ee9ff7dd1f74c8eac8b98e" Feb 26 12:06:39 crc kubenswrapper[4699]: I0226 12:06:39.756773 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:39 crc kubenswrapper[4699]: I0226 12:06:39.756896 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:39 crc kubenswrapper[4699]: I0226 12:06:39.805326 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:39 crc kubenswrapper[4699]: I0226 12:06:39.988800 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:40 crc kubenswrapper[4699]: I0226 12:06:40.047533 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p28sp"] Feb 26 12:06:41 crc kubenswrapper[4699]: I0226 12:06:41.946097 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p28sp" podUID="e885b191-c6b8-4780-bc61-eaae0d82ad32" containerName="registry-server" containerID="cri-o://ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb" gracePeriod=2 Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.458195 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.599665 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-catalog-content\") pod \"e885b191-c6b8-4780-bc61-eaae0d82ad32\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.599809 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsd4j\" (UniqueName: \"kubernetes.io/projected/e885b191-c6b8-4780-bc61-eaae0d82ad32-kube-api-access-jsd4j\") pod \"e885b191-c6b8-4780-bc61-eaae0d82ad32\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.600081 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-utilities\") pod \"e885b191-c6b8-4780-bc61-eaae0d82ad32\" (UID: \"e885b191-c6b8-4780-bc61-eaae0d82ad32\") " Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.601448 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-utilities" (OuterVolumeSpecName: "utilities") pod "e885b191-c6b8-4780-bc61-eaae0d82ad32" (UID: "e885b191-c6b8-4780-bc61-eaae0d82ad32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.607478 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e885b191-c6b8-4780-bc61-eaae0d82ad32-kube-api-access-jsd4j" (OuterVolumeSpecName: "kube-api-access-jsd4j") pod "e885b191-c6b8-4780-bc61-eaae0d82ad32" (UID: "e885b191-c6b8-4780-bc61-eaae0d82ad32"). InnerVolumeSpecName "kube-api-access-jsd4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.625176 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e885b191-c6b8-4780-bc61-eaae0d82ad32" (UID: "e885b191-c6b8-4780-bc61-eaae0d82ad32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.702102 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsd4j\" (UniqueName: \"kubernetes.io/projected/e885b191-c6b8-4780-bc61-eaae0d82ad32-kube-api-access-jsd4j\") on node \"crc\" DevicePath \"\"" Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.702330 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.702405 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e885b191-c6b8-4780-bc61-eaae0d82ad32-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.958703 4699 generic.go:334] "Generic (PLEG): container finished" podID="e885b191-c6b8-4780-bc61-eaae0d82ad32" containerID="ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb" exitCode=0 Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.958787 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p28sp" Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.958805 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p28sp" event={"ID":"e885b191-c6b8-4780-bc61-eaae0d82ad32","Type":"ContainerDied","Data":"ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb"} Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.960169 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p28sp" event={"ID":"e885b191-c6b8-4780-bc61-eaae0d82ad32","Type":"ContainerDied","Data":"909ae1a31f3b190a5c879bc6499a8426947ec5b29159b933f594b746178602d1"} Feb 26 12:06:42 crc kubenswrapper[4699]: I0226 12:06:42.960199 4699 scope.go:117] "RemoveContainer" containerID="ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb" Feb 26 12:06:43 crc kubenswrapper[4699]: I0226 12:06:43.009179 4699 scope.go:117] "RemoveContainer" containerID="dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75" Feb 26 12:06:43 crc kubenswrapper[4699]: I0226 12:06:43.033270 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p28sp"] Feb 26 12:06:43 crc kubenswrapper[4699]: I0226 12:06:43.039894 4699 scope.go:117] "RemoveContainer" containerID="8160ec562f5b448ada6b4d935ee7e1221ceccd01d09aa2645648d7aa891ef261" Feb 26 12:06:43 crc kubenswrapper[4699]: I0226 12:06:43.068470 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p28sp"] Feb 26 12:06:43 crc kubenswrapper[4699]: I0226 12:06:43.117452 4699 scope.go:117] "RemoveContainer" containerID="ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb" Feb 26 12:06:43 crc kubenswrapper[4699]: E0226 12:06:43.120621 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb\": container with ID starting with ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb not found: ID does not exist" containerID="ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb" Feb 26 12:06:43 crc kubenswrapper[4699]: I0226 12:06:43.120675 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb"} err="failed to get container status \"ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb\": rpc error: code = NotFound desc = could not find container \"ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb\": container with ID starting with ca3b9ef1bb6b5ba34f2b3ea120521b34766663e638086b148598e9ebcbfb2efb not found: ID does not exist" Feb 26 12:06:43 crc kubenswrapper[4699]: I0226 12:06:43.120705 4699 scope.go:117] "RemoveContainer" containerID="dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75" Feb 26 12:06:43 crc kubenswrapper[4699]: E0226 12:06:43.124503 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75\": container with ID starting with dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75 not found: ID does not exist" containerID="dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75" Feb 26 12:06:43 crc kubenswrapper[4699]: I0226 12:06:43.124568 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75"} err="failed to get container status \"dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75\": rpc error: code = NotFound desc = could not find container \"dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75\": container with ID 
starting with dd638da9b07bb5495355fbba9db461caf931aa048c85356f172f22e39da72d75 not found: ID does not exist" Feb 26 12:06:43 crc kubenswrapper[4699]: I0226 12:06:43.124606 4699 scope.go:117] "RemoveContainer" containerID="8160ec562f5b448ada6b4d935ee7e1221ceccd01d09aa2645648d7aa891ef261" Feb 26 12:06:43 crc kubenswrapper[4699]: E0226 12:06:43.128389 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8160ec562f5b448ada6b4d935ee7e1221ceccd01d09aa2645648d7aa891ef261\": container with ID starting with 8160ec562f5b448ada6b4d935ee7e1221ceccd01d09aa2645648d7aa891ef261 not found: ID does not exist" containerID="8160ec562f5b448ada6b4d935ee7e1221ceccd01d09aa2645648d7aa891ef261" Feb 26 12:06:43 crc kubenswrapper[4699]: I0226 12:06:43.128429 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8160ec562f5b448ada6b4d935ee7e1221ceccd01d09aa2645648d7aa891ef261"} err="failed to get container status \"8160ec562f5b448ada6b4d935ee7e1221ceccd01d09aa2645648d7aa891ef261\": rpc error: code = NotFound desc = could not find container \"8160ec562f5b448ada6b4d935ee7e1221ceccd01d09aa2645648d7aa891ef261\": container with ID starting with 8160ec562f5b448ada6b4d935ee7e1221ceccd01d09aa2645648d7aa891ef261 not found: ID does not exist" Feb 26 12:06:44 crc kubenswrapper[4699]: I0226 12:06:44.272540 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e885b191-c6b8-4780-bc61-eaae0d82ad32" path="/var/lib/kubelet/pods/e885b191-c6b8-4780-bc61-eaae0d82ad32/volumes" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.142653 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535128-4vkb4"] Feb 26 12:08:00 crc kubenswrapper[4699]: E0226 12:08:00.143588 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1592fb-fd51-4a05-ac77-6094fe72263b" containerName="extract-content" Feb 26 12:08:00 crc 
kubenswrapper[4699]: I0226 12:08:00.143603 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1592fb-fd51-4a05-ac77-6094fe72263b" containerName="extract-content" Feb 26 12:08:00 crc kubenswrapper[4699]: E0226 12:08:00.143616 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1592fb-fd51-4a05-ac77-6094fe72263b" containerName="registry-server" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.143622 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1592fb-fd51-4a05-ac77-6094fe72263b" containerName="registry-server" Feb 26 12:08:00 crc kubenswrapper[4699]: E0226 12:08:00.143652 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e885b191-c6b8-4780-bc61-eaae0d82ad32" containerName="extract-utilities" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.143660 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e885b191-c6b8-4780-bc61-eaae0d82ad32" containerName="extract-utilities" Feb 26 12:08:00 crc kubenswrapper[4699]: E0226 12:08:00.143672 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e885b191-c6b8-4780-bc61-eaae0d82ad32" containerName="extract-content" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.143678 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e885b191-c6b8-4780-bc61-eaae0d82ad32" containerName="extract-content" Feb 26 12:08:00 crc kubenswrapper[4699]: E0226 12:08:00.143689 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1592fb-fd51-4a05-ac77-6094fe72263b" containerName="extract-utilities" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.143695 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1592fb-fd51-4a05-ac77-6094fe72263b" containerName="extract-utilities" Feb 26 12:08:00 crc kubenswrapper[4699]: E0226 12:08:00.143707 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e885b191-c6b8-4780-bc61-eaae0d82ad32" containerName="registry-server" Feb 26 12:08:00 crc 
kubenswrapper[4699]: I0226 12:08:00.143713 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e885b191-c6b8-4780-bc61-eaae0d82ad32" containerName="registry-server" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.143886 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e885b191-c6b8-4780-bc61-eaae0d82ad32" containerName="registry-server" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.143901 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d1592fb-fd51-4a05-ac77-6094fe72263b" containerName="registry-server" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.144586 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535128-4vkb4" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.147309 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.149054 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.149264 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.151483 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535128-4vkb4"] Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.196713 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wjp\" (UniqueName: \"kubernetes.io/projected/0e519986-41ca-4360-b9bd-14a485e9a635-kube-api-access-s8wjp\") pod \"auto-csr-approver-29535128-4vkb4\" (UID: \"0e519986-41ca-4360-b9bd-14a485e9a635\") " pod="openshift-infra/auto-csr-approver-29535128-4vkb4" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.298657 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8wjp\" (UniqueName: \"kubernetes.io/projected/0e519986-41ca-4360-b9bd-14a485e9a635-kube-api-access-s8wjp\") pod \"auto-csr-approver-29535128-4vkb4\" (UID: \"0e519986-41ca-4360-b9bd-14a485e9a635\") " pod="openshift-infra/auto-csr-approver-29535128-4vkb4" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.320884 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8wjp\" (UniqueName: \"kubernetes.io/projected/0e519986-41ca-4360-b9bd-14a485e9a635-kube-api-access-s8wjp\") pod \"auto-csr-approver-29535128-4vkb4\" (UID: \"0e519986-41ca-4360-b9bd-14a485e9a635\") " pod="openshift-infra/auto-csr-approver-29535128-4vkb4" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.467652 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535128-4vkb4" Feb 26 12:08:00 crc kubenswrapper[4699]: I0226 12:08:00.891867 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535128-4vkb4"] Feb 26 12:08:01 crc kubenswrapper[4699]: I0226 12:08:01.688689 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535128-4vkb4" event={"ID":"0e519986-41ca-4360-b9bd-14a485e9a635","Type":"ContainerStarted","Data":"a2c85cba99d29d47dea8e6e3a3d1b5671f747058fa2193a1bb1b8f06acfab4fd"} Feb 26 12:08:02 crc kubenswrapper[4699]: I0226 12:08:02.698807 4699 generic.go:334] "Generic (PLEG): container finished" podID="0e519986-41ca-4360-b9bd-14a485e9a635" containerID="edcce5d1b2431ea73d4d1a16900e65c51edf48fce3e10f865133733ba98e31ff" exitCode=0 Feb 26 12:08:02 crc kubenswrapper[4699]: I0226 12:08:02.698867 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535128-4vkb4" 
event={"ID":"0e519986-41ca-4360-b9bd-14a485e9a635","Type":"ContainerDied","Data":"edcce5d1b2431ea73d4d1a16900e65c51edf48fce3e10f865133733ba98e31ff"} Feb 26 12:08:04 crc kubenswrapper[4699]: I0226 12:08:04.171785 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535128-4vkb4" Feb 26 12:08:04 crc kubenswrapper[4699]: I0226 12:08:04.273100 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8wjp\" (UniqueName: \"kubernetes.io/projected/0e519986-41ca-4360-b9bd-14a485e9a635-kube-api-access-s8wjp\") pod \"0e519986-41ca-4360-b9bd-14a485e9a635\" (UID: \"0e519986-41ca-4360-b9bd-14a485e9a635\") " Feb 26 12:08:04 crc kubenswrapper[4699]: I0226 12:08:04.279013 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e519986-41ca-4360-b9bd-14a485e9a635-kube-api-access-s8wjp" (OuterVolumeSpecName: "kube-api-access-s8wjp") pod "0e519986-41ca-4360-b9bd-14a485e9a635" (UID: "0e519986-41ca-4360-b9bd-14a485e9a635"). InnerVolumeSpecName "kube-api-access-s8wjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:08:04 crc kubenswrapper[4699]: I0226 12:08:04.378417 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8wjp\" (UniqueName: \"kubernetes.io/projected/0e519986-41ca-4360-b9bd-14a485e9a635-kube-api-access-s8wjp\") on node \"crc\" DevicePath \"\"" Feb 26 12:08:04 crc kubenswrapper[4699]: I0226 12:08:04.719979 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535128-4vkb4" event={"ID":"0e519986-41ca-4360-b9bd-14a485e9a635","Type":"ContainerDied","Data":"a2c85cba99d29d47dea8e6e3a3d1b5671f747058fa2193a1bb1b8f06acfab4fd"} Feb 26 12:08:04 crc kubenswrapper[4699]: I0226 12:08:04.720238 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c85cba99d29d47dea8e6e3a3d1b5671f747058fa2193a1bb1b8f06acfab4fd" Feb 26 12:08:04 crc kubenswrapper[4699]: I0226 12:08:04.720044 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535128-4vkb4" Feb 26 12:08:05 crc kubenswrapper[4699]: I0226 12:08:05.253177 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535122-xp67b"] Feb 26 12:08:05 crc kubenswrapper[4699]: I0226 12:08:05.261251 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535122-xp67b"] Feb 26 12:08:06 crc kubenswrapper[4699]: I0226 12:08:06.272730 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a271ab-4d30-4863-b3f6-74750cc65a91" path="/var/lib/kubelet/pods/27a271ab-4d30-4863-b3f6-74750cc65a91/volumes" Feb 26 12:08:11 crc kubenswrapper[4699]: I0226 12:08:11.585174 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 12:08:11 crc kubenswrapper[4699]: I0226 12:08:11.585734 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.218834 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j9cvm"] Feb 26 12:08:30 crc kubenswrapper[4699]: E0226 12:08:30.219974 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e519986-41ca-4360-b9bd-14a485e9a635" containerName="oc" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.219990 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e519986-41ca-4360-b9bd-14a485e9a635" containerName="oc" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.220333 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e519986-41ca-4360-b9bd-14a485e9a635" containerName="oc" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.222159 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.237157 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9cvm"] Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.295010 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-catalog-content\") pod \"redhat-operators-j9cvm\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.295081 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-utilities\") pod \"redhat-operators-j9cvm\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.295137 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5hsq\" (UniqueName: \"kubernetes.io/projected/60aaedcf-19db-44db-848e-bc1c1f21bf5e-kube-api-access-c5hsq\") pod \"redhat-operators-j9cvm\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.397976 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-catalog-content\") pod \"redhat-operators-j9cvm\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.398084 4699 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-utilities\") pod \"redhat-operators-j9cvm\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.398146 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5hsq\" (UniqueName: \"kubernetes.io/projected/60aaedcf-19db-44db-848e-bc1c1f21bf5e-kube-api-access-c5hsq\") pod \"redhat-operators-j9cvm\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.399753 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-utilities\") pod \"redhat-operators-j9cvm\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.399927 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-catalog-content\") pod \"redhat-operators-j9cvm\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.428363 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5hsq\" (UniqueName: \"kubernetes.io/projected/60aaedcf-19db-44db-848e-bc1c1f21bf5e-kube-api-access-c5hsq\") pod \"redhat-operators-j9cvm\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:30 crc kubenswrapper[4699]: I0226 12:08:30.546166 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:31 crc kubenswrapper[4699]: I0226 12:08:31.034711 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9cvm"] Feb 26 12:08:31 crc kubenswrapper[4699]: I0226 12:08:31.962702 4699 generic.go:334] "Generic (PLEG): container finished" podID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" containerID="52cffe73c6ff4eeb0777e02dbbe4e74948138030e8e11c482a1bb38359f089c9" exitCode=0 Feb 26 12:08:31 crc kubenswrapper[4699]: I0226 12:08:31.962821 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9cvm" event={"ID":"60aaedcf-19db-44db-848e-bc1c1f21bf5e","Type":"ContainerDied","Data":"52cffe73c6ff4eeb0777e02dbbe4e74948138030e8e11c482a1bb38359f089c9"} Feb 26 12:08:31 crc kubenswrapper[4699]: I0226 12:08:31.963109 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9cvm" event={"ID":"60aaedcf-19db-44db-848e-bc1c1f21bf5e","Type":"ContainerStarted","Data":"34c5da8ce72a6b46d599d468d487b9c2112b612ace915f6720aa7df89d07fd4d"} Feb 26 12:08:35 crc kubenswrapper[4699]: I0226 12:08:35.993796 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9cvm" event={"ID":"60aaedcf-19db-44db-848e-bc1c1f21bf5e","Type":"ContainerStarted","Data":"99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417"} Feb 26 12:08:39 crc kubenswrapper[4699]: I0226 12:08:39.516026 4699 scope.go:117] "RemoveContainer" containerID="f8af8d4fb65b858c79bdd65cde626e347dc9e20fd0df6dcb1821aae0c9ee9b41" Feb 26 12:08:41 crc kubenswrapper[4699]: I0226 12:08:41.585150 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 
26 12:08:41 crc kubenswrapper[4699]: I0226 12:08:41.585496 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:08:42 crc kubenswrapper[4699]: I0226 12:08:42.052939 4699 generic.go:334] "Generic (PLEG): container finished" podID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" containerID="99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417" exitCode=0 Feb 26 12:08:42 crc kubenswrapper[4699]: I0226 12:08:42.052998 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9cvm" event={"ID":"60aaedcf-19db-44db-848e-bc1c1f21bf5e","Type":"ContainerDied","Data":"99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417"} Feb 26 12:08:42 crc kubenswrapper[4699]: I0226 12:08:42.056003 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 12:08:43 crc kubenswrapper[4699]: I0226 12:08:43.065425 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9cvm" event={"ID":"60aaedcf-19db-44db-848e-bc1c1f21bf5e","Type":"ContainerStarted","Data":"b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705"} Feb 26 12:08:43 crc kubenswrapper[4699]: I0226 12:08:43.090944 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j9cvm" podStartSLOduration=2.642229999 podStartE2EDuration="13.090914088s" podCreationTimestamp="2026-02-26 12:08:30 +0000 UTC" firstStartedPulling="2026-02-26 12:08:31.965020508 +0000 UTC m=+3457.775846942" lastFinishedPulling="2026-02-26 12:08:42.413704577 +0000 UTC m=+3468.224531031" observedRunningTime="2026-02-26 12:08:43.083439016 +0000 UTC m=+3468.894265460" 
watchObservedRunningTime="2026-02-26 12:08:43.090914088 +0000 UTC m=+3468.901740532" Feb 26 12:08:50 crc kubenswrapper[4699]: I0226 12:08:50.547321 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:50 crc kubenswrapper[4699]: I0226 12:08:50.547882 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:50 crc kubenswrapper[4699]: I0226 12:08:50.598130 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:51 crc kubenswrapper[4699]: I0226 12:08:51.189966 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:51 crc kubenswrapper[4699]: I0226 12:08:51.235284 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9cvm"] Feb 26 12:08:53 crc kubenswrapper[4699]: I0226 12:08:53.152062 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j9cvm" podUID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" containerName="registry-server" containerID="cri-o://b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705" gracePeriod=2 Feb 26 12:08:53 crc kubenswrapper[4699]: I0226 12:08:53.614973 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:53 crc kubenswrapper[4699]: I0226 12:08:53.702580 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-utilities\") pod \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " Feb 26 12:08:53 crc kubenswrapper[4699]: I0226 12:08:53.702754 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5hsq\" (UniqueName: \"kubernetes.io/projected/60aaedcf-19db-44db-848e-bc1c1f21bf5e-kube-api-access-c5hsq\") pod \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " Feb 26 12:08:53 crc kubenswrapper[4699]: I0226 12:08:53.702904 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-catalog-content\") pod \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\" (UID: \"60aaedcf-19db-44db-848e-bc1c1f21bf5e\") " Feb 26 12:08:53 crc kubenswrapper[4699]: I0226 12:08:53.703245 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-utilities" (OuterVolumeSpecName: "utilities") pod "60aaedcf-19db-44db-848e-bc1c1f21bf5e" (UID: "60aaedcf-19db-44db-848e-bc1c1f21bf5e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:08:53 crc kubenswrapper[4699]: I0226 12:08:53.703614 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:08:53 crc kubenswrapper[4699]: I0226 12:08:53.708549 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60aaedcf-19db-44db-848e-bc1c1f21bf5e-kube-api-access-c5hsq" (OuterVolumeSpecName: "kube-api-access-c5hsq") pod "60aaedcf-19db-44db-848e-bc1c1f21bf5e" (UID: "60aaedcf-19db-44db-848e-bc1c1f21bf5e"). InnerVolumeSpecName "kube-api-access-c5hsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:08:53 crc kubenswrapper[4699]: I0226 12:08:53.805273 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5hsq\" (UniqueName: \"kubernetes.io/projected/60aaedcf-19db-44db-848e-bc1c1f21bf5e-kube-api-access-c5hsq\") on node \"crc\" DevicePath \"\"" Feb 26 12:08:53 crc kubenswrapper[4699]: I0226 12:08:53.828186 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60aaedcf-19db-44db-848e-bc1c1f21bf5e" (UID: "60aaedcf-19db-44db-848e-bc1c1f21bf5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:08:53 crc kubenswrapper[4699]: I0226 12:08:53.907053 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60aaedcf-19db-44db-848e-bc1c1f21bf5e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.163633 4699 generic.go:334] "Generic (PLEG): container finished" podID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" containerID="b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705" exitCode=0 Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.163678 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9cvm" Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.163687 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9cvm" event={"ID":"60aaedcf-19db-44db-848e-bc1c1f21bf5e","Type":"ContainerDied","Data":"b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705"} Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.163733 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9cvm" event={"ID":"60aaedcf-19db-44db-848e-bc1c1f21bf5e","Type":"ContainerDied","Data":"34c5da8ce72a6b46d599d468d487b9c2112b612ace915f6720aa7df89d07fd4d"} Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.163755 4699 scope.go:117] "RemoveContainer" containerID="b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705" Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.190017 4699 scope.go:117] "RemoveContainer" containerID="99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417" Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.197588 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9cvm"] Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 
12:08:54.207383 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j9cvm"] Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.224954 4699 scope.go:117] "RemoveContainer" containerID="52cffe73c6ff4eeb0777e02dbbe4e74948138030e8e11c482a1bb38359f089c9" Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.252320 4699 scope.go:117] "RemoveContainer" containerID="b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705" Feb 26 12:08:54 crc kubenswrapper[4699]: E0226 12:08:54.253086 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705\": container with ID starting with b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705 not found: ID does not exist" containerID="b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705" Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.253211 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705"} err="failed to get container status \"b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705\": rpc error: code = NotFound desc = could not find container \"b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705\": container with ID starting with b36ccaaf0a6bf8d5b84ac0b134b909bb1978a209c041449567b49679e51dd705 not found: ID does not exist" Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.253248 4699 scope.go:117] "RemoveContainer" containerID="99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417" Feb 26 12:08:54 crc kubenswrapper[4699]: E0226 12:08:54.253582 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417\": container with ID 
starting with 99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417 not found: ID does not exist" containerID="99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417" Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.253612 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417"} err="failed to get container status \"99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417\": rpc error: code = NotFound desc = could not find container \"99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417\": container with ID starting with 99b42523d61f3676353772bdbec5886c84aaf0dac806fa9435ac0bae9259e417 not found: ID does not exist" Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.253644 4699 scope.go:117] "RemoveContainer" containerID="52cffe73c6ff4eeb0777e02dbbe4e74948138030e8e11c482a1bb38359f089c9" Feb 26 12:08:54 crc kubenswrapper[4699]: E0226 12:08:54.253915 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52cffe73c6ff4eeb0777e02dbbe4e74948138030e8e11c482a1bb38359f089c9\": container with ID starting with 52cffe73c6ff4eeb0777e02dbbe4e74948138030e8e11c482a1bb38359f089c9 not found: ID does not exist" containerID="52cffe73c6ff4eeb0777e02dbbe4e74948138030e8e11c482a1bb38359f089c9" Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.253960 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52cffe73c6ff4eeb0777e02dbbe4e74948138030e8e11c482a1bb38359f089c9"} err="failed to get container status \"52cffe73c6ff4eeb0777e02dbbe4e74948138030e8e11c482a1bb38359f089c9\": rpc error: code = NotFound desc = could not find container \"52cffe73c6ff4eeb0777e02dbbe4e74948138030e8e11c482a1bb38359f089c9\": container with ID starting with 52cffe73c6ff4eeb0777e02dbbe4e74948138030e8e11c482a1bb38359f089c9 not found: 
ID does not exist" Feb 26 12:08:54 crc kubenswrapper[4699]: I0226 12:08:54.271480 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" path="/var/lib/kubelet/pods/60aaedcf-19db-44db-848e-bc1c1f21bf5e/volumes" Feb 26 12:09:11 crc kubenswrapper[4699]: I0226 12:09:11.585194 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:09:11 crc kubenswrapper[4699]: I0226 12:09:11.585727 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:09:11 crc kubenswrapper[4699]: I0226 12:09:11.585787 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 12:09:11 crc kubenswrapper[4699]: I0226 12:09:11.586585 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"243333360a6594f8acbe71e1e9197448e74ac1a0258671779fb6af974ca032dd"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 12:09:11 crc kubenswrapper[4699]: I0226 12:09:11.586640 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" 
containerID="cri-o://243333360a6594f8acbe71e1e9197448e74ac1a0258671779fb6af974ca032dd" gracePeriod=600 Feb 26 12:09:12 crc kubenswrapper[4699]: I0226 12:09:12.318256 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="243333360a6594f8acbe71e1e9197448e74ac1a0258671779fb6af974ca032dd" exitCode=0 Feb 26 12:09:12 crc kubenswrapper[4699]: I0226 12:09:12.318341 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"243333360a6594f8acbe71e1e9197448e74ac1a0258671779fb6af974ca032dd"} Feb 26 12:09:12 crc kubenswrapper[4699]: I0226 12:09:12.318560 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"} Feb 26 12:09:12 crc kubenswrapper[4699]: I0226 12:09:12.318580 4699 scope.go:117] "RemoveContainer" containerID="2f17994adbe3e6b82be8b555f7785740546350970bac8074db0cb6adbacaf948" Feb 26 12:09:36 crc kubenswrapper[4699]: I0226 12:09:36.569690 4699 generic.go:334] "Generic (PLEG): container finished" podID="19e02200-91be-49f8-8174-4a0bf6cda9dd" containerID="084f210d6c46d1c100bf0bcfdc7ffd17238944ee1beffdf271d0e8035c249561" exitCode=0 Feb 26 12:09:36 crc kubenswrapper[4699]: I0226 12:09:36.569773 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"19e02200-91be-49f8-8174-4a0bf6cda9dd","Type":"ContainerDied","Data":"084f210d6c46d1c100bf0bcfdc7ffd17238944ee1beffdf271d0e8035c249561"} Feb 26 12:09:37 crc kubenswrapper[4699]: I0226 12:09:37.990703 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.040541 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw89z\" (UniqueName: \"kubernetes.io/projected/19e02200-91be-49f8-8174-4a0bf6cda9dd-kube-api-access-qw89z\") pod \"19e02200-91be-49f8-8174-4a0bf6cda9dd\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.040625 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"19e02200-91be-49f8-8174-4a0bf6cda9dd\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.040677 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config-secret\") pod \"19e02200-91be-49f8-8174-4a0bf6cda9dd\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.040702 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-config-data\") pod \"19e02200-91be-49f8-8174-4a0bf6cda9dd\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.040776 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-workdir\") pod \"19e02200-91be-49f8-8174-4a0bf6cda9dd\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.040836 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config\") pod \"19e02200-91be-49f8-8174-4a0bf6cda9dd\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.040871 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-temporary\") pod \"19e02200-91be-49f8-8174-4a0bf6cda9dd\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.040896 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ca-certs\") pod \"19e02200-91be-49f8-8174-4a0bf6cda9dd\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.040937 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ssh-key\") pod \"19e02200-91be-49f8-8174-4a0bf6cda9dd\" (UID: \"19e02200-91be-49f8-8174-4a0bf6cda9dd\") " Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.041366 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "19e02200-91be-49f8-8174-4a0bf6cda9dd" (UID: "19e02200-91be-49f8-8174-4a0bf6cda9dd"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.041632 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-config-data" (OuterVolumeSpecName: "config-data") pod "19e02200-91be-49f8-8174-4a0bf6cda9dd" (UID: "19e02200-91be-49f8-8174-4a0bf6cda9dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.041860 4699 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.041879 4699 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.045737 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "19e02200-91be-49f8-8174-4a0bf6cda9dd" (UID: "19e02200-91be-49f8-8174-4a0bf6cda9dd"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.046092 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "19e02200-91be-49f8-8174-4a0bf6cda9dd" (UID: "19e02200-91be-49f8-8174-4a0bf6cda9dd"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.046424 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e02200-91be-49f8-8174-4a0bf6cda9dd-kube-api-access-qw89z" (OuterVolumeSpecName: "kube-api-access-qw89z") pod "19e02200-91be-49f8-8174-4a0bf6cda9dd" (UID: "19e02200-91be-49f8-8174-4a0bf6cda9dd"). InnerVolumeSpecName "kube-api-access-qw89z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.074795 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "19e02200-91be-49f8-8174-4a0bf6cda9dd" (UID: "19e02200-91be-49f8-8174-4a0bf6cda9dd"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.076229 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "19e02200-91be-49f8-8174-4a0bf6cda9dd" (UID: "19e02200-91be-49f8-8174-4a0bf6cda9dd"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.088640 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "19e02200-91be-49f8-8174-4a0bf6cda9dd" (UID: "19e02200-91be-49f8-8174-4a0bf6cda9dd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.100695 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "19e02200-91be-49f8-8174-4a0bf6cda9dd" (UID: "19e02200-91be-49f8-8174-4a0bf6cda9dd"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.144304 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw89z\" (UniqueName: \"kubernetes.io/projected/19e02200-91be-49f8-8174-4a0bf6cda9dd-kube-api-access-qw89z\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.144386 4699 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.144399 4699 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.144409 4699 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/19e02200-91be-49f8-8174-4a0bf6cda9dd-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.144420 4699 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19e02200-91be-49f8-8174-4a0bf6cda9dd-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.144428 4699 reconciler_common.go:293] "Volume 
detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.144437 4699 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19e02200-91be-49f8-8174-4a0bf6cda9dd-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.172077 4699 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.246512 4699 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.588409 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"19e02200-91be-49f8-8174-4a0bf6cda9dd","Type":"ContainerDied","Data":"0f178f25ec5476c2b73a67092a0049cc1be8c1984e676d6f03c82e6dac970a0f"} Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.588462 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f178f25ec5476c2b73a67092a0049cc1be8c1984e676d6f03c82e6dac970a0f" Feb 26 12:09:38 crc kubenswrapper[4699]: I0226 12:09:38.588513 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.587136 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 12:09:41 crc kubenswrapper[4699]: E0226 12:09:41.587932 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" containerName="registry-server" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.587946 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" containerName="registry-server" Feb 26 12:09:41 crc kubenswrapper[4699]: E0226 12:09:41.587967 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" containerName="extract-content" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.587973 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" containerName="extract-content" Feb 26 12:09:41 crc kubenswrapper[4699]: E0226 12:09:41.588000 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e02200-91be-49f8-8174-4a0bf6cda9dd" containerName="tempest-tests-tempest-tests-runner" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.588007 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e02200-91be-49f8-8174-4a0bf6cda9dd" containerName="tempest-tests-tempest-tests-runner" Feb 26 12:09:41 crc kubenswrapper[4699]: E0226 12:09:41.588021 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" containerName="extract-utilities" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.588027 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" containerName="extract-utilities" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.588233 4699 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="19e02200-91be-49f8-8174-4a0bf6cda9dd" containerName="tempest-tests-tempest-tests-runner" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.588247 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="60aaedcf-19db-44db-848e-bc1c1f21bf5e" containerName="registry-server" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.588974 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.592352 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fmwlb" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.598663 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.614348 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"66beadbe-fd5d-48af-8a33-8a652c8d1c71\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.614688 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6jw\" (UniqueName: \"kubernetes.io/projected/66beadbe-fd5d-48af-8a33-8a652c8d1c71-kube-api-access-cg6jw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"66beadbe-fd5d-48af-8a33-8a652c8d1c71\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.716816 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg6jw\" (UniqueName: 
\"kubernetes.io/projected/66beadbe-fd5d-48af-8a33-8a652c8d1c71-kube-api-access-cg6jw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"66beadbe-fd5d-48af-8a33-8a652c8d1c71\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.717011 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"66beadbe-fd5d-48af-8a33-8a652c8d1c71\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.717775 4699 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"66beadbe-fd5d-48af-8a33-8a652c8d1c71\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.746839 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg6jw\" (UniqueName: \"kubernetes.io/projected/66beadbe-fd5d-48af-8a33-8a652c8d1c71-kube-api-access-cg6jw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"66beadbe-fd5d-48af-8a33-8a652c8d1c71\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 12:09:41 crc kubenswrapper[4699]: I0226 12:09:41.757396 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"66beadbe-fd5d-48af-8a33-8a652c8d1c71\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 12:09:41 
crc kubenswrapper[4699]: I0226 12:09:41.922923 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 12:09:42 crc kubenswrapper[4699]: I0226 12:09:42.415560 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 12:09:42 crc kubenswrapper[4699]: I0226 12:09:42.627770 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"66beadbe-fd5d-48af-8a33-8a652c8d1c71","Type":"ContainerStarted","Data":"847da687299bc9288ded47021eee31d66d769ac8d475d3f5fb40ed971a8106f0"} Feb 26 12:09:43 crc kubenswrapper[4699]: I0226 12:09:43.638939 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"66beadbe-fd5d-48af-8a33-8a652c8d1c71","Type":"ContainerStarted","Data":"53d8b61522ead810a906b8e7f70dd6e20b8faf411b8a954290488492e292ef7b"} Feb 26 12:09:43 crc kubenswrapper[4699]: I0226 12:09:43.655503 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.765777923 podStartE2EDuration="2.655477524s" podCreationTimestamp="2026-02-26 12:09:41 +0000 UTC" firstStartedPulling="2026-02-26 12:09:42.417886399 +0000 UTC m=+3528.228712833" lastFinishedPulling="2026-02-26 12:09:43.30758596 +0000 UTC m=+3529.118412434" observedRunningTime="2026-02-26 12:09:43.650642577 +0000 UTC m=+3529.461469021" watchObservedRunningTime="2026-02-26 12:09:43.655477524 +0000 UTC m=+3529.466303958" Feb 26 12:10:00 crc kubenswrapper[4699]: I0226 12:10:00.163873 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535130-v52gt"] Feb 26 12:10:00 crc kubenswrapper[4699]: I0226 12:10:00.167063 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535130-v52gt" Feb 26 12:10:00 crc kubenswrapper[4699]: I0226 12:10:00.172072 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:10:00 crc kubenswrapper[4699]: I0226 12:10:00.172552 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:10:00 crc kubenswrapper[4699]: I0226 12:10:00.173062 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:10:00 crc kubenswrapper[4699]: I0226 12:10:00.178062 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535130-v52gt"] Feb 26 12:10:00 crc kubenswrapper[4699]: I0226 12:10:00.297622 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnt8x\" (UniqueName: \"kubernetes.io/projected/524c38c5-5560-45a6-aa15-3010000b2165-kube-api-access-lnt8x\") pod \"auto-csr-approver-29535130-v52gt\" (UID: \"524c38c5-5560-45a6-aa15-3010000b2165\") " pod="openshift-infra/auto-csr-approver-29535130-v52gt" Feb 26 12:10:00 crc kubenswrapper[4699]: I0226 12:10:00.414698 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnt8x\" (UniqueName: \"kubernetes.io/projected/524c38c5-5560-45a6-aa15-3010000b2165-kube-api-access-lnt8x\") pod \"auto-csr-approver-29535130-v52gt\" (UID: \"524c38c5-5560-45a6-aa15-3010000b2165\") " pod="openshift-infra/auto-csr-approver-29535130-v52gt" Feb 26 12:10:00 crc kubenswrapper[4699]: I0226 12:10:00.436766 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnt8x\" (UniqueName: \"kubernetes.io/projected/524c38c5-5560-45a6-aa15-3010000b2165-kube-api-access-lnt8x\") pod \"auto-csr-approver-29535130-v52gt\" (UID: \"524c38c5-5560-45a6-aa15-3010000b2165\") " 
pod="openshift-infra/auto-csr-approver-29535130-v52gt" Feb 26 12:10:00 crc kubenswrapper[4699]: I0226 12:10:00.494375 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535130-v52gt" Feb 26 12:10:00 crc kubenswrapper[4699]: I0226 12:10:00.931692 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535130-v52gt"] Feb 26 12:10:01 crc kubenswrapper[4699]: I0226 12:10:01.796975 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535130-v52gt" event={"ID":"524c38c5-5560-45a6-aa15-3010000b2165","Type":"ContainerStarted","Data":"95b121cd73f6696c5e9b893eea048f56496a29dd23330d6b28609095b8186a5a"} Feb 26 12:10:02 crc kubenswrapper[4699]: E0226 12:10:02.640443 4699 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod524c38c5_5560_45a6_aa15_3010000b2165.slice/crio-b1e1f8248ccd17084f1b3aa21ad1265018f7368ffdb4ddbf286721c65474aad5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod524c38c5_5560_45a6_aa15_3010000b2165.slice/crio-conmon-b1e1f8248ccd17084f1b3aa21ad1265018f7368ffdb4ddbf286721c65474aad5.scope\": RecentStats: unable to find data in memory cache]" Feb 26 12:10:02 crc kubenswrapper[4699]: I0226 12:10:02.807277 4699 generic.go:334] "Generic (PLEG): container finished" podID="524c38c5-5560-45a6-aa15-3010000b2165" containerID="b1e1f8248ccd17084f1b3aa21ad1265018f7368ffdb4ddbf286721c65474aad5" exitCode=0 Feb 26 12:10:02 crc kubenswrapper[4699]: I0226 12:10:02.807324 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535130-v52gt" event={"ID":"524c38c5-5560-45a6-aa15-3010000b2165","Type":"ContainerDied","Data":"b1e1f8248ccd17084f1b3aa21ad1265018f7368ffdb4ddbf286721c65474aad5"} Feb 26 
12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.175915 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535130-v52gt" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.192585 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnt8x\" (UniqueName: \"kubernetes.io/projected/524c38c5-5560-45a6-aa15-3010000b2165-kube-api-access-lnt8x\") pod \"524c38c5-5560-45a6-aa15-3010000b2165\" (UID: \"524c38c5-5560-45a6-aa15-3010000b2165\") " Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.199775 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524c38c5-5560-45a6-aa15-3010000b2165-kube-api-access-lnt8x" (OuterVolumeSpecName: "kube-api-access-lnt8x") pod "524c38c5-5560-45a6-aa15-3010000b2165" (UID: "524c38c5-5560-45a6-aa15-3010000b2165"). InnerVolumeSpecName "kube-api-access-lnt8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.294447 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnt8x\" (UniqueName: \"kubernetes.io/projected/524c38c5-5560-45a6-aa15-3010000b2165-kube-api-access-lnt8x\") on node \"crc\" DevicePath \"\"" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.830387 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535130-v52gt" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.830376 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535130-v52gt" event={"ID":"524c38c5-5560-45a6-aa15-3010000b2165","Type":"ContainerDied","Data":"95b121cd73f6696c5e9b893eea048f56496a29dd23330d6b28609095b8186a5a"} Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.830601 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95b121cd73f6696c5e9b893eea048f56496a29dd23330d6b28609095b8186a5a" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.911189 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4plh5/must-gather-cwqbr"] Feb 26 12:10:04 crc kubenswrapper[4699]: E0226 12:10:04.911723 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524c38c5-5560-45a6-aa15-3010000b2165" containerName="oc" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.911746 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="524c38c5-5560-45a6-aa15-3010000b2165" containerName="oc" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.912002 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="524c38c5-5560-45a6-aa15-3010000b2165" containerName="oc" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.913375 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4plh5/must-gather-cwqbr" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.915652 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4plh5"/"default-dockercfg-lbpps" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.916615 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4plh5"/"openshift-service-ca.crt" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.918606 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4plh5"/"kube-root-ca.crt" Feb 26 12:10:04 crc kubenswrapper[4699]: I0226 12:10:04.936043 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4plh5/must-gather-cwqbr"] Feb 26 12:10:05 crc kubenswrapper[4699]: I0226 12:10:05.107632 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p96w\" (UniqueName: \"kubernetes.io/projected/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-kube-api-access-8p96w\") pod \"must-gather-cwqbr\" (UID: \"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9\") " pod="openshift-must-gather-4plh5/must-gather-cwqbr" Feb 26 12:10:05 crc kubenswrapper[4699]: I0226 12:10:05.107699 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-must-gather-output\") pod \"must-gather-cwqbr\" (UID: \"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9\") " pod="openshift-must-gather-4plh5/must-gather-cwqbr" Feb 26 12:10:05 crc kubenswrapper[4699]: I0226 12:10:05.210142 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p96w\" (UniqueName: \"kubernetes.io/projected/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-kube-api-access-8p96w\") pod \"must-gather-cwqbr\" (UID: \"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9\") " 
pod="openshift-must-gather-4plh5/must-gather-cwqbr" Feb 26 12:10:05 crc kubenswrapper[4699]: I0226 12:10:05.210217 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-must-gather-output\") pod \"must-gather-cwqbr\" (UID: \"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9\") " pod="openshift-must-gather-4plh5/must-gather-cwqbr" Feb 26 12:10:05 crc kubenswrapper[4699]: I0226 12:10:05.210765 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-must-gather-output\") pod \"must-gather-cwqbr\" (UID: \"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9\") " pod="openshift-must-gather-4plh5/must-gather-cwqbr" Feb 26 12:10:05 crc kubenswrapper[4699]: I0226 12:10:05.229460 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p96w\" (UniqueName: \"kubernetes.io/projected/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-kube-api-access-8p96w\") pod \"must-gather-cwqbr\" (UID: \"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9\") " pod="openshift-must-gather-4plh5/must-gather-cwqbr" Feb 26 12:10:05 crc kubenswrapper[4699]: I0226 12:10:05.233585 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4plh5/must-gather-cwqbr" Feb 26 12:10:05 crc kubenswrapper[4699]: I0226 12:10:05.256803 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535124-gqg7z"] Feb 26 12:10:05 crc kubenswrapper[4699]: I0226 12:10:05.265063 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535124-gqg7z"] Feb 26 12:10:05 crc kubenswrapper[4699]: I0226 12:10:05.666041 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4plh5/must-gather-cwqbr"] Feb 26 12:10:05 crc kubenswrapper[4699]: W0226 12:10:05.667632 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e6e42cb_6891_4f97_9ba8_b4c6ad63a7a9.slice/crio-0ef1f3bd591d4afff35f968e33ba667440df3e1ced5cc806faf7879a065504f3 WatchSource:0}: Error finding container 0ef1f3bd591d4afff35f968e33ba667440df3e1ced5cc806faf7879a065504f3: Status 404 returned error can't find the container with id 0ef1f3bd591d4afff35f968e33ba667440df3e1ced5cc806faf7879a065504f3 Feb 26 12:10:05 crc kubenswrapper[4699]: I0226 12:10:05.842778 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/must-gather-cwqbr" event={"ID":"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9","Type":"ContainerStarted","Data":"0ef1f3bd591d4afff35f968e33ba667440df3e1ced5cc806faf7879a065504f3"} Feb 26 12:10:06 crc kubenswrapper[4699]: I0226 12:10:06.280900 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af10706a-2423-4bb2-b0a5-de33b64b4b64" path="/var/lib/kubelet/pods/af10706a-2423-4bb2-b0a5-de33b64b4b64/volumes" Feb 26 12:10:12 crc kubenswrapper[4699]: I0226 12:10:12.905264 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/must-gather-cwqbr" 
event={"ID":"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9","Type":"ContainerStarted","Data":"40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd"} Feb 26 12:10:12 crc kubenswrapper[4699]: I0226 12:10:12.905829 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/must-gather-cwqbr" event={"ID":"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9","Type":"ContainerStarted","Data":"69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb"} Feb 26 12:10:12 crc kubenswrapper[4699]: I0226 12:10:12.933041 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4plh5/must-gather-cwqbr" podStartSLOduration=2.527169629 podStartE2EDuration="8.932993295s" podCreationTimestamp="2026-02-26 12:10:04 +0000 UTC" firstStartedPulling="2026-02-26 12:10:05.670956458 +0000 UTC m=+3551.481782892" lastFinishedPulling="2026-02-26 12:10:12.076780124 +0000 UTC m=+3557.887606558" observedRunningTime="2026-02-26 12:10:12.926170732 +0000 UTC m=+3558.736997186" watchObservedRunningTime="2026-02-26 12:10:12.932993295 +0000 UTC m=+3558.743819729" Feb 26 12:10:15 crc kubenswrapper[4699]: I0226 12:10:15.783317 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4plh5/crc-debug-bswb4"] Feb 26 12:10:15 crc kubenswrapper[4699]: I0226 12:10:15.785765 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-bswb4" Feb 26 12:10:15 crc kubenswrapper[4699]: I0226 12:10:15.834611 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpcq4\" (UniqueName: \"kubernetes.io/projected/864b723d-bc08-43f3-a5ec-718a0066eac0-kube-api-access-fpcq4\") pod \"crc-debug-bswb4\" (UID: \"864b723d-bc08-43f3-a5ec-718a0066eac0\") " pod="openshift-must-gather-4plh5/crc-debug-bswb4" Feb 26 12:10:15 crc kubenswrapper[4699]: I0226 12:10:15.834716 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/864b723d-bc08-43f3-a5ec-718a0066eac0-host\") pod \"crc-debug-bswb4\" (UID: \"864b723d-bc08-43f3-a5ec-718a0066eac0\") " pod="openshift-must-gather-4plh5/crc-debug-bswb4" Feb 26 12:10:15 crc kubenswrapper[4699]: I0226 12:10:15.936624 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpcq4\" (UniqueName: \"kubernetes.io/projected/864b723d-bc08-43f3-a5ec-718a0066eac0-kube-api-access-fpcq4\") pod \"crc-debug-bswb4\" (UID: \"864b723d-bc08-43f3-a5ec-718a0066eac0\") " pod="openshift-must-gather-4plh5/crc-debug-bswb4" Feb 26 12:10:15 crc kubenswrapper[4699]: I0226 12:10:15.936725 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/864b723d-bc08-43f3-a5ec-718a0066eac0-host\") pod \"crc-debug-bswb4\" (UID: \"864b723d-bc08-43f3-a5ec-718a0066eac0\") " pod="openshift-must-gather-4plh5/crc-debug-bswb4" Feb 26 12:10:15 crc kubenswrapper[4699]: I0226 12:10:15.936911 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/864b723d-bc08-43f3-a5ec-718a0066eac0-host\") pod \"crc-debug-bswb4\" (UID: \"864b723d-bc08-43f3-a5ec-718a0066eac0\") " pod="openshift-must-gather-4plh5/crc-debug-bswb4" Feb 26 12:10:15 crc 
kubenswrapper[4699]: I0226 12:10:15.956807 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpcq4\" (UniqueName: \"kubernetes.io/projected/864b723d-bc08-43f3-a5ec-718a0066eac0-kube-api-access-fpcq4\") pod \"crc-debug-bswb4\" (UID: \"864b723d-bc08-43f3-a5ec-718a0066eac0\") " pod="openshift-must-gather-4plh5/crc-debug-bswb4" Feb 26 12:10:16 crc kubenswrapper[4699]: I0226 12:10:16.112850 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-bswb4" Feb 26 12:10:16 crc kubenswrapper[4699]: I0226 12:10:16.943244 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/crc-debug-bswb4" event={"ID":"864b723d-bc08-43f3-a5ec-718a0066eac0","Type":"ContainerStarted","Data":"adeffda648e6a329b7a914fc4d25da8e13877082c895a3e7fe7e9714af65d8ec"} Feb 26 12:10:28 crc kubenswrapper[4699]: I0226 12:10:28.055865 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/crc-debug-bswb4" event={"ID":"864b723d-bc08-43f3-a5ec-718a0066eac0","Type":"ContainerStarted","Data":"548a0e8c1b14580465351f41c66bafc1b217669a68f00a69bd71038d87540f9f"} Feb 26 12:10:28 crc kubenswrapper[4699]: I0226 12:10:28.081057 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4plh5/crc-debug-bswb4" podStartSLOduration=1.687489682 podStartE2EDuration="13.081018608s" podCreationTimestamp="2026-02-26 12:10:15 +0000 UTC" firstStartedPulling="2026-02-26 12:10:16.196870879 +0000 UTC m=+3562.007697313" lastFinishedPulling="2026-02-26 12:10:27.590399805 +0000 UTC m=+3573.401226239" observedRunningTime="2026-02-26 12:10:28.07051953 +0000 UTC m=+3573.881345974" watchObservedRunningTime="2026-02-26 12:10:28.081018608 +0000 UTC m=+3573.891845062" Feb 26 12:10:39 crc kubenswrapper[4699]: I0226 12:10:39.616159 4699 scope.go:117] "RemoveContainer" 
containerID="bee6179034d0d615200cc2b0cca46b2b7ac3bbc955a96024e317fe4212ffc149" Feb 26 12:11:09 crc kubenswrapper[4699]: I0226 12:11:09.428849 4699 generic.go:334] "Generic (PLEG): container finished" podID="864b723d-bc08-43f3-a5ec-718a0066eac0" containerID="548a0e8c1b14580465351f41c66bafc1b217669a68f00a69bd71038d87540f9f" exitCode=0 Feb 26 12:11:09 crc kubenswrapper[4699]: I0226 12:11:09.428897 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/crc-debug-bswb4" event={"ID":"864b723d-bc08-43f3-a5ec-718a0066eac0","Type":"ContainerDied","Data":"548a0e8c1b14580465351f41c66bafc1b217669a68f00a69bd71038d87540f9f"} Feb 26 12:11:10 crc kubenswrapper[4699]: I0226 12:11:10.530570 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-bswb4" Feb 26 12:11:10 crc kubenswrapper[4699]: I0226 12:11:10.565782 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4plh5/crc-debug-bswb4"] Feb 26 12:11:10 crc kubenswrapper[4699]: I0226 12:11:10.575058 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4plh5/crc-debug-bswb4"] Feb 26 12:11:10 crc kubenswrapper[4699]: I0226 12:11:10.609230 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpcq4\" (UniqueName: \"kubernetes.io/projected/864b723d-bc08-43f3-a5ec-718a0066eac0-kube-api-access-fpcq4\") pod \"864b723d-bc08-43f3-a5ec-718a0066eac0\" (UID: \"864b723d-bc08-43f3-a5ec-718a0066eac0\") " Feb 26 12:11:10 crc kubenswrapper[4699]: I0226 12:11:10.609348 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/864b723d-bc08-43f3-a5ec-718a0066eac0-host\") pod \"864b723d-bc08-43f3-a5ec-718a0066eac0\" (UID: \"864b723d-bc08-43f3-a5ec-718a0066eac0\") " Feb 26 12:11:10 crc kubenswrapper[4699]: I0226 12:11:10.609680 4699 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/864b723d-bc08-43f3-a5ec-718a0066eac0-host" (OuterVolumeSpecName: "host") pod "864b723d-bc08-43f3-a5ec-718a0066eac0" (UID: "864b723d-bc08-43f3-a5ec-718a0066eac0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 12:11:10 crc kubenswrapper[4699]: I0226 12:11:10.610091 4699 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/864b723d-bc08-43f3-a5ec-718a0066eac0-host\") on node \"crc\" DevicePath \"\"" Feb 26 12:11:10 crc kubenswrapper[4699]: I0226 12:11:10.616347 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864b723d-bc08-43f3-a5ec-718a0066eac0-kube-api-access-fpcq4" (OuterVolumeSpecName: "kube-api-access-fpcq4") pod "864b723d-bc08-43f3-a5ec-718a0066eac0" (UID: "864b723d-bc08-43f3-a5ec-718a0066eac0"). InnerVolumeSpecName "kube-api-access-fpcq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:11:10 crc kubenswrapper[4699]: I0226 12:11:10.711768 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpcq4\" (UniqueName: \"kubernetes.io/projected/864b723d-bc08-43f3-a5ec-718a0066eac0-kube-api-access-fpcq4\") on node \"crc\" DevicePath \"\"" Feb 26 12:11:11 crc kubenswrapper[4699]: I0226 12:11:11.448441 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adeffda648e6a329b7a914fc4d25da8e13877082c895a3e7fe7e9714af65d8ec" Feb 26 12:11:11 crc kubenswrapper[4699]: I0226 12:11:11.448481 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-bswb4" Feb 26 12:11:11 crc kubenswrapper[4699]: I0226 12:11:11.584959 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:11:11 crc kubenswrapper[4699]: I0226 12:11:11.585045 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:11:11 crc kubenswrapper[4699]: I0226 12:11:11.853013 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4plh5/crc-debug-9d4cm"] Feb 26 12:11:11 crc kubenswrapper[4699]: E0226 12:11:11.853485 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864b723d-bc08-43f3-a5ec-718a0066eac0" containerName="container-00" Feb 26 12:11:11 crc kubenswrapper[4699]: I0226 12:11:11.853510 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="864b723d-bc08-43f3-a5ec-718a0066eac0" containerName="container-00" Feb 26 12:11:11 crc kubenswrapper[4699]: I0226 12:11:11.853777 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="864b723d-bc08-43f3-a5ec-718a0066eac0" containerName="container-00" Feb 26 12:11:11 crc kubenswrapper[4699]: I0226 12:11:11.854591 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-9d4cm" Feb 26 12:11:11 crc kubenswrapper[4699]: I0226 12:11:11.936144 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg6zg\" (UniqueName: \"kubernetes.io/projected/0e0a84dc-a0ad-4da3-b004-98594b781410-kube-api-access-wg6zg\") pod \"crc-debug-9d4cm\" (UID: \"0e0a84dc-a0ad-4da3-b004-98594b781410\") " pod="openshift-must-gather-4plh5/crc-debug-9d4cm" Feb 26 12:11:11 crc kubenswrapper[4699]: I0226 12:11:11.936242 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e0a84dc-a0ad-4da3-b004-98594b781410-host\") pod \"crc-debug-9d4cm\" (UID: \"0e0a84dc-a0ad-4da3-b004-98594b781410\") " pod="openshift-must-gather-4plh5/crc-debug-9d4cm" Feb 26 12:11:12 crc kubenswrapper[4699]: I0226 12:11:12.038840 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg6zg\" (UniqueName: \"kubernetes.io/projected/0e0a84dc-a0ad-4da3-b004-98594b781410-kube-api-access-wg6zg\") pod \"crc-debug-9d4cm\" (UID: \"0e0a84dc-a0ad-4da3-b004-98594b781410\") " pod="openshift-must-gather-4plh5/crc-debug-9d4cm" Feb 26 12:11:12 crc kubenswrapper[4699]: I0226 12:11:12.038933 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e0a84dc-a0ad-4da3-b004-98594b781410-host\") pod \"crc-debug-9d4cm\" (UID: \"0e0a84dc-a0ad-4da3-b004-98594b781410\") " pod="openshift-must-gather-4plh5/crc-debug-9d4cm" Feb 26 12:11:12 crc kubenswrapper[4699]: I0226 12:11:12.039310 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e0a84dc-a0ad-4da3-b004-98594b781410-host\") pod \"crc-debug-9d4cm\" (UID: \"0e0a84dc-a0ad-4da3-b004-98594b781410\") " pod="openshift-must-gather-4plh5/crc-debug-9d4cm" Feb 26 12:11:12 crc 
kubenswrapper[4699]: I0226 12:11:12.061983 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg6zg\" (UniqueName: \"kubernetes.io/projected/0e0a84dc-a0ad-4da3-b004-98594b781410-kube-api-access-wg6zg\") pod \"crc-debug-9d4cm\" (UID: \"0e0a84dc-a0ad-4da3-b004-98594b781410\") " pod="openshift-must-gather-4plh5/crc-debug-9d4cm" Feb 26 12:11:12 crc kubenswrapper[4699]: I0226 12:11:12.170591 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-9d4cm" Feb 26 12:11:12 crc kubenswrapper[4699]: I0226 12:11:12.277756 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864b723d-bc08-43f3-a5ec-718a0066eac0" path="/var/lib/kubelet/pods/864b723d-bc08-43f3-a5ec-718a0066eac0/volumes" Feb 26 12:11:12 crc kubenswrapper[4699]: I0226 12:11:12.455937 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/crc-debug-9d4cm" event={"ID":"0e0a84dc-a0ad-4da3-b004-98594b781410","Type":"ContainerStarted","Data":"1830ebb83943317d1452f94ddd1bbd24c88d43dcb4e4541e0c9c10d16e425c29"} Feb 26 12:11:12 crc kubenswrapper[4699]: I0226 12:11:12.455983 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/crc-debug-9d4cm" event={"ID":"0e0a84dc-a0ad-4da3-b004-98594b781410","Type":"ContainerStarted","Data":"2d733d9b3e91b35010efb39dfda838f4c195644dab97fb7f5913b7b91c78c259"} Feb 26 12:11:12 crc kubenswrapper[4699]: I0226 12:11:12.478714 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4plh5/crc-debug-9d4cm" podStartSLOduration=1.478692661 podStartE2EDuration="1.478692661s" podCreationTimestamp="2026-02-26 12:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 12:11:12.470850299 +0000 UTC m=+3618.281676753" watchObservedRunningTime="2026-02-26 12:11:12.478692661 +0000 
UTC m=+3618.289519095" Feb 26 12:11:13 crc kubenswrapper[4699]: I0226 12:11:13.467504 4699 generic.go:334] "Generic (PLEG): container finished" podID="0e0a84dc-a0ad-4da3-b004-98594b781410" containerID="1830ebb83943317d1452f94ddd1bbd24c88d43dcb4e4541e0c9c10d16e425c29" exitCode=0 Feb 26 12:11:13 crc kubenswrapper[4699]: I0226 12:11:13.467605 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/crc-debug-9d4cm" event={"ID":"0e0a84dc-a0ad-4da3-b004-98594b781410","Type":"ContainerDied","Data":"1830ebb83943317d1452f94ddd1bbd24c88d43dcb4e4541e0c9c10d16e425c29"} Feb 26 12:11:14 crc kubenswrapper[4699]: I0226 12:11:14.572231 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-9d4cm" Feb 26 12:11:14 crc kubenswrapper[4699]: I0226 12:11:14.627018 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4plh5/crc-debug-9d4cm"] Feb 26 12:11:14 crc kubenswrapper[4699]: I0226 12:11:14.638270 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4plh5/crc-debug-9d4cm"] Feb 26 12:11:14 crc kubenswrapper[4699]: I0226 12:11:14.687344 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e0a84dc-a0ad-4da3-b004-98594b781410-host\") pod \"0e0a84dc-a0ad-4da3-b004-98594b781410\" (UID: \"0e0a84dc-a0ad-4da3-b004-98594b781410\") " Feb 26 12:11:14 crc kubenswrapper[4699]: I0226 12:11:14.687607 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg6zg\" (UniqueName: \"kubernetes.io/projected/0e0a84dc-a0ad-4da3-b004-98594b781410-kube-api-access-wg6zg\") pod \"0e0a84dc-a0ad-4da3-b004-98594b781410\" (UID: \"0e0a84dc-a0ad-4da3-b004-98594b781410\") " Feb 26 12:11:14 crc kubenswrapper[4699]: I0226 12:11:14.687830 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/0e0a84dc-a0ad-4da3-b004-98594b781410-host" (OuterVolumeSpecName: "host") pod "0e0a84dc-a0ad-4da3-b004-98594b781410" (UID: "0e0a84dc-a0ad-4da3-b004-98594b781410"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 12:11:14 crc kubenswrapper[4699]: I0226 12:11:14.688081 4699 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e0a84dc-a0ad-4da3-b004-98594b781410-host\") on node \"crc\" DevicePath \"\"" Feb 26 12:11:14 crc kubenswrapper[4699]: I0226 12:11:14.694854 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0a84dc-a0ad-4da3-b004-98594b781410-kube-api-access-wg6zg" (OuterVolumeSpecName: "kube-api-access-wg6zg") pod "0e0a84dc-a0ad-4da3-b004-98594b781410" (UID: "0e0a84dc-a0ad-4da3-b004-98594b781410"). InnerVolumeSpecName "kube-api-access-wg6zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:11:14 crc kubenswrapper[4699]: I0226 12:11:14.790030 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg6zg\" (UniqueName: \"kubernetes.io/projected/0e0a84dc-a0ad-4da3-b004-98594b781410-kube-api-access-wg6zg\") on node \"crc\" DevicePath \"\"" Feb 26 12:11:15 crc kubenswrapper[4699]: I0226 12:11:15.489663 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d733d9b3e91b35010efb39dfda838f4c195644dab97fb7f5913b7b91c78c259" Feb 26 12:11:15 crc kubenswrapper[4699]: I0226 12:11:15.489820 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-9d4cm" Feb 26 12:11:15 crc kubenswrapper[4699]: I0226 12:11:15.813076 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4plh5/crc-debug-tbmzf"] Feb 26 12:11:15 crc kubenswrapper[4699]: E0226 12:11:15.813797 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0a84dc-a0ad-4da3-b004-98594b781410" containerName="container-00" Feb 26 12:11:15 crc kubenswrapper[4699]: I0226 12:11:15.813810 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0a84dc-a0ad-4da3-b004-98594b781410" containerName="container-00" Feb 26 12:11:15 crc kubenswrapper[4699]: I0226 12:11:15.814038 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e0a84dc-a0ad-4da3-b004-98594b781410" containerName="container-00" Feb 26 12:11:15 crc kubenswrapper[4699]: I0226 12:11:15.814719 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-tbmzf" Feb 26 12:11:15 crc kubenswrapper[4699]: I0226 12:11:15.912623 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtbls\" (UniqueName: \"kubernetes.io/projected/e9944e38-a08c-4638-b132-1841c82d51c2-kube-api-access-jtbls\") pod \"crc-debug-tbmzf\" (UID: \"e9944e38-a08c-4638-b132-1841c82d51c2\") " pod="openshift-must-gather-4plh5/crc-debug-tbmzf" Feb 26 12:11:15 crc kubenswrapper[4699]: I0226 12:11:15.912671 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9944e38-a08c-4638-b132-1841c82d51c2-host\") pod \"crc-debug-tbmzf\" (UID: \"e9944e38-a08c-4638-b132-1841c82d51c2\") " pod="openshift-must-gather-4plh5/crc-debug-tbmzf" Feb 26 12:11:16 crc kubenswrapper[4699]: I0226 12:11:16.014341 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtbls\" (UniqueName: 
\"kubernetes.io/projected/e9944e38-a08c-4638-b132-1841c82d51c2-kube-api-access-jtbls\") pod \"crc-debug-tbmzf\" (UID: \"e9944e38-a08c-4638-b132-1841c82d51c2\") " pod="openshift-must-gather-4plh5/crc-debug-tbmzf" Feb 26 12:11:16 crc kubenswrapper[4699]: I0226 12:11:16.014415 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9944e38-a08c-4638-b132-1841c82d51c2-host\") pod \"crc-debug-tbmzf\" (UID: \"e9944e38-a08c-4638-b132-1841c82d51c2\") " pod="openshift-must-gather-4plh5/crc-debug-tbmzf" Feb 26 12:11:16 crc kubenswrapper[4699]: I0226 12:11:16.014551 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9944e38-a08c-4638-b132-1841c82d51c2-host\") pod \"crc-debug-tbmzf\" (UID: \"e9944e38-a08c-4638-b132-1841c82d51c2\") " pod="openshift-must-gather-4plh5/crc-debug-tbmzf" Feb 26 12:11:16 crc kubenswrapper[4699]: I0226 12:11:16.038285 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtbls\" (UniqueName: \"kubernetes.io/projected/e9944e38-a08c-4638-b132-1841c82d51c2-kube-api-access-jtbls\") pod \"crc-debug-tbmzf\" (UID: \"e9944e38-a08c-4638-b132-1841c82d51c2\") " pod="openshift-must-gather-4plh5/crc-debug-tbmzf" Feb 26 12:11:16 crc kubenswrapper[4699]: I0226 12:11:16.138397 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-tbmzf" Feb 26 12:11:16 crc kubenswrapper[4699]: I0226 12:11:16.273199 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0a84dc-a0ad-4da3-b004-98594b781410" path="/var/lib/kubelet/pods/0e0a84dc-a0ad-4da3-b004-98594b781410/volumes" Feb 26 12:11:16 crc kubenswrapper[4699]: I0226 12:11:16.500442 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/crc-debug-tbmzf" event={"ID":"e9944e38-a08c-4638-b132-1841c82d51c2","Type":"ContainerStarted","Data":"ad8cfdb739766e618ecc3a2f4fa20f12f46798f6783950e3cb56a1c6c7d53124"} Feb 26 12:11:17 crc kubenswrapper[4699]: I0226 12:11:17.510982 4699 generic.go:334] "Generic (PLEG): container finished" podID="e9944e38-a08c-4638-b132-1841c82d51c2" containerID="80c52c8f64990f9768d3a0c9c4bf05f19b3350e70a27d2f9ab510f7d7259fb47" exitCode=0 Feb 26 12:11:17 crc kubenswrapper[4699]: I0226 12:11:17.511038 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/crc-debug-tbmzf" event={"ID":"e9944e38-a08c-4638-b132-1841c82d51c2","Type":"ContainerDied","Data":"80c52c8f64990f9768d3a0c9c4bf05f19b3350e70a27d2f9ab510f7d7259fb47"} Feb 26 12:11:17 crc kubenswrapper[4699]: I0226 12:11:17.557532 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4plh5/crc-debug-tbmzf"] Feb 26 12:11:17 crc kubenswrapper[4699]: I0226 12:11:17.570544 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4plh5/crc-debug-tbmzf"] Feb 26 12:11:18 crc kubenswrapper[4699]: I0226 12:11:18.629860 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-tbmzf" Feb 26 12:11:18 crc kubenswrapper[4699]: I0226 12:11:18.766728 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtbls\" (UniqueName: \"kubernetes.io/projected/e9944e38-a08c-4638-b132-1841c82d51c2-kube-api-access-jtbls\") pod \"e9944e38-a08c-4638-b132-1841c82d51c2\" (UID: \"e9944e38-a08c-4638-b132-1841c82d51c2\") " Feb 26 12:11:18 crc kubenswrapper[4699]: I0226 12:11:18.766858 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9944e38-a08c-4638-b132-1841c82d51c2-host\") pod \"e9944e38-a08c-4638-b132-1841c82d51c2\" (UID: \"e9944e38-a08c-4638-b132-1841c82d51c2\") " Feb 26 12:11:18 crc kubenswrapper[4699]: I0226 12:11:18.766938 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9944e38-a08c-4638-b132-1841c82d51c2-host" (OuterVolumeSpecName: "host") pod "e9944e38-a08c-4638-b132-1841c82d51c2" (UID: "e9944e38-a08c-4638-b132-1841c82d51c2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 12:11:18 crc kubenswrapper[4699]: I0226 12:11:18.767337 4699 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9944e38-a08c-4638-b132-1841c82d51c2-host\") on node \"crc\" DevicePath \"\"" Feb 26 12:11:18 crc kubenswrapper[4699]: I0226 12:11:18.772590 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9944e38-a08c-4638-b132-1841c82d51c2-kube-api-access-jtbls" (OuterVolumeSpecName: "kube-api-access-jtbls") pod "e9944e38-a08c-4638-b132-1841c82d51c2" (UID: "e9944e38-a08c-4638-b132-1841c82d51c2"). InnerVolumeSpecName "kube-api-access-jtbls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:11:18 crc kubenswrapper[4699]: I0226 12:11:18.868810 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtbls\" (UniqueName: \"kubernetes.io/projected/e9944e38-a08c-4638-b132-1841c82d51c2-kube-api-access-jtbls\") on node \"crc\" DevicePath \"\"" Feb 26 12:11:19 crc kubenswrapper[4699]: I0226 12:11:19.529813 4699 scope.go:117] "RemoveContainer" containerID="80c52c8f64990f9768d3a0c9c4bf05f19b3350e70a27d2f9ab510f7d7259fb47" Feb 26 12:11:19 crc kubenswrapper[4699]: I0226 12:11:19.529861 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4plh5/crc-debug-tbmzf" Feb 26 12:11:20 crc kubenswrapper[4699]: I0226 12:11:20.272393 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9944e38-a08c-4638-b132-1841c82d51c2" path="/var/lib/kubelet/pods/e9944e38-a08c-4638-b132-1841c82d51c2/volumes" Feb 26 12:11:33 crc kubenswrapper[4699]: I0226 12:11:33.633093 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-977f89944-b96zk_dd004e01-9dac-4316-b6ee-05c1a0f20713/barbican-api/0.log" Feb 26 12:11:33 crc kubenswrapper[4699]: I0226 12:11:33.812784 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-977f89944-b96zk_dd004e01-9dac-4316-b6ee-05c1a0f20713/barbican-api-log/0.log" Feb 26 12:11:33 crc kubenswrapper[4699]: I0226 12:11:33.830243 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bb8c656f4-cl8tt_770f4ffe-352c-416b-8f67-a894c4107003/barbican-keystone-listener/0.log" Feb 26 12:11:33 crc kubenswrapper[4699]: I0226 12:11:33.908642 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bb8c656f4-cl8tt_770f4ffe-352c-416b-8f67-a894c4107003/barbican-keystone-listener-log/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.019152 4699 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6596b66679-qmv4f_edb59470-4038-48c2-a3ec-f3046406a971/barbican-worker/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.054823 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6596b66679-qmv4f_edb59470-4038-48c2-a3ec-f3046406a971/barbican-worker-log/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.233294 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj_fee4a36b-0896-43c1-9b23-3da3ae870cbe/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.259557 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_09a6eb79-27c3-465b-adae-b32d96c56b65/ceilometer-central-agent/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.334734 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_09a6eb79-27c3-465b-adae-b32d96c56b65/ceilometer-notification-agent/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.404978 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_09a6eb79-27c3-465b-adae-b32d96c56b65/proxy-httpd/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.470727 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_09a6eb79-27c3-465b-adae-b32d96c56b65/sg-core/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.588243 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c2c2d2c1-e68e-4b14-a732-3b42a6132503/cinder-api/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.614106 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c2c2d2c1-e68e-4b14-a732-3b42a6132503/cinder-api-log/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.745155 4699 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fbf1f488-444f-45d3-b5e6-44506bf45f8e/cinder-scheduler/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.885593 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fbf1f488-444f-45d3-b5e6-44506bf45f8e/probe/0.log" Feb 26 12:11:34 crc kubenswrapper[4699]: I0226 12:11:34.974374 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-86gl7_b1a06be0-15ce-4abd-b9e7-7e11e789bd64/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:35 crc kubenswrapper[4699]: I0226 12:11:35.085341 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-h9q25_85e0d37e-fb25-4bbc-afe5-7e6ab304390c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:35 crc kubenswrapper[4699]: I0226 12:11:35.244894 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-hddfn_24dd88a8-4737-4ebc-8925-b2bcedb760c2/init/0.log" Feb 26 12:11:35 crc kubenswrapper[4699]: I0226 12:11:35.533541 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-hddfn_24dd88a8-4737-4ebc-8925-b2bcedb760c2/dnsmasq-dns/0.log" Feb 26 12:11:35 crc kubenswrapper[4699]: I0226 12:11:35.557598 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-hddfn_24dd88a8-4737-4ebc-8925-b2bcedb760c2/init/0.log" Feb 26 12:11:35 crc kubenswrapper[4699]: I0226 12:11:35.635227 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-f97wz_8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:36 crc kubenswrapper[4699]: I0226 12:11:36.005725 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_9c58ea0a-4ad4-47cf-8976-a004ef7e56da/glance-httpd/0.log" Feb 26 12:11:36 crc kubenswrapper[4699]: I0226 12:11:36.020901 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9c58ea0a-4ad4-47cf-8976-a004ef7e56da/glance-log/0.log" Feb 26 12:11:36 crc kubenswrapper[4699]: I0226 12:11:36.173371 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_796738f1-8a6c-4e91-bdfe-bee2f252b3fc/glance-httpd/0.log" Feb 26 12:11:36 crc kubenswrapper[4699]: I0226 12:11:36.206904 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_796738f1-8a6c-4e91-bdfe-bee2f252b3fc/glance-log/0.log" Feb 26 12:11:36 crc kubenswrapper[4699]: I0226 12:11:36.409461 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5795557cd8-dvzqq_15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0/horizon/0.log" Feb 26 12:11:36 crc kubenswrapper[4699]: I0226 12:11:36.615846 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv_e537c30c-dc6b-406f-bb86-5540ebd8a36d/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:36 crc kubenswrapper[4699]: I0226 12:11:36.627649 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5795557cd8-dvzqq_15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0/horizon-log/0.log" Feb 26 12:11:36 crc kubenswrapper[4699]: I0226 12:11:36.725913 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-mlb2f_ac66647f-74c0-4a4e-9925-e47cd90568a1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:36 crc kubenswrapper[4699]: I0226 12:11:36.947417 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29535121-plvtd_ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68/keystone-cron/0.log" Feb 26 12:11:36 crc kubenswrapper[4699]: I0226 12:11:36.996863 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-67d4f89fb9-65kmq_5d9e1983-3363-4542-a5f0-deb132ea6994/keystone-api/0.log" Feb 26 12:11:37 crc kubenswrapper[4699]: I0226 12:11:37.144509 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c685fadd-b283-40bc-9de2-3372317b9875/kube-state-metrics/0.log" Feb 26 12:11:37 crc kubenswrapper[4699]: I0226 12:11:37.227917 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f_6436c321-6850-4db3-81b2-0dc329e10900/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:37 crc kubenswrapper[4699]: I0226 12:11:37.619104 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d45896d49-mh862_862cb546-78f8-4864-a158-9dc217ec2796/neutron-httpd/0.log" Feb 26 12:11:37 crc kubenswrapper[4699]: I0226 12:11:37.637991 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d45896d49-mh862_862cb546-78f8-4864-a158-9dc217ec2796/neutron-api/0.log" Feb 26 12:11:38 crc kubenswrapper[4699]: I0226 12:11:38.259283 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l_59456382-a459-4f82-ac99-b96eb735ddb9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:38 crc kubenswrapper[4699]: I0226 12:11:38.671284 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2ff15a2d-962f-421b-be00-e3bf6ef22612/nova-cell0-conductor-conductor/0.log" Feb 26 12:11:38 crc kubenswrapper[4699]: I0226 12:11:38.693258 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_2d0d807f-7fdc-4239-b7bb-1952c2f7c222/nova-api-log/0.log" Feb 26 12:11:38 crc kubenswrapper[4699]: I0226 12:11:38.876584 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2d0d807f-7fdc-4239-b7bb-1952c2f7c222/nova-api-api/0.log" Feb 26 12:11:39 crc kubenswrapper[4699]: I0226 12:11:39.058546 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ff2b3846-c197-4cc6-a442-0f466d97d53d/nova-cell1-conductor-conductor/0.log" Feb 26 12:11:39 crc kubenswrapper[4699]: I0226 12:11:39.094617 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8bb28763-ceae-456c-a0d6-5df33b478106/nova-cell1-novncproxy-novncproxy/0.log" Feb 26 12:11:39 crc kubenswrapper[4699]: I0226 12:11:39.377813 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-wv666_2c2e8329-038c-4347-b30f-f8b42f36cc67/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:39 crc kubenswrapper[4699]: I0226 12:11:39.530295 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_15752dfa-4afb-412f-99a0-75c5fe76f6a8/nova-metadata-log/0.log" Feb 26 12:11:39 crc kubenswrapper[4699]: I0226 12:11:39.863705 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_edce8e75-6dd5-4fbd-8f76-bc6553cc27b9/mysql-bootstrap/0.log" Feb 26 12:11:39 crc kubenswrapper[4699]: I0226 12:11:39.899447 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9d8371db-373f-4a41-97cb-b2d00aa17571/nova-scheduler-scheduler/0.log" Feb 26 12:11:40 crc kubenswrapper[4699]: I0226 12:11:40.080756 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_edce8e75-6dd5-4fbd-8f76-bc6553cc27b9/mysql-bootstrap/0.log" Feb 26 12:11:40 crc kubenswrapper[4699]: I0226 12:11:40.126223 4699 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_edce8e75-6dd5-4fbd-8f76-bc6553cc27b9/galera/0.log" Feb 26 12:11:40 crc kubenswrapper[4699]: I0226 12:11:40.324101 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6fdc6b6d-ac77-4179-9864-f220d622c0f4/mysql-bootstrap/0.log" Feb 26 12:11:40 crc kubenswrapper[4699]: I0226 12:11:40.506439 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6fdc6b6d-ac77-4179-9864-f220d622c0f4/mysql-bootstrap/0.log" Feb 26 12:11:40 crc kubenswrapper[4699]: I0226 12:11:40.513947 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6fdc6b6d-ac77-4179-9864-f220d622c0f4/galera/0.log" Feb 26 12:11:40 crc kubenswrapper[4699]: I0226 12:11:40.678676 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_16db7cc3-bd7c-44aa-b92f-d2a645d96ef0/openstackclient/0.log" Feb 26 12:11:40 crc kubenswrapper[4699]: I0226 12:11:40.696744 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_15752dfa-4afb-412f-99a0-75c5fe76f6a8/nova-metadata-metadata/0.log" Feb 26 12:11:40 crc kubenswrapper[4699]: I0226 12:11:40.798972 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qfxsz_a4767003-9eba-4b86-933c-5bcbaa93e458/openstack-network-exporter/0.log" Feb 26 12:11:40 crc kubenswrapper[4699]: I0226 12:11:40.983440 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gxnxl_8afc038e-11dc-4959-a6b0-61e9b1c2dc35/ovsdb-server-init/0.log" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.001256 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nrvng_cd4015f0-f1a7-40d7-ae69-089f74a6873d/ovn-controller/0.log" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.215422 4699 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-controller-ovs-gxnxl_8afc038e-11dc-4959-a6b0-61e9b1c2dc35/ovs-vswitchd/0.log" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.222768 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gxnxl_8afc038e-11dc-4959-a6b0-61e9b1c2dc35/ovsdb-server-init/0.log" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.274107 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gxnxl_8afc038e-11dc-4959-a6b0-61e9b1c2dc35/ovsdb-server/0.log" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.431886 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-hmpqg_dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.531379 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8fbd47d6-02c1-4ac4-a981-231eb0f13530/ovn-northd/0.log" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.549601 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8fbd47d6-02c1-4ac4-a981-231eb0f13530/openstack-network-exporter/0.log" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.584618 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.584742 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.714465 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ef805480-81ec-4d0b-b2ca-06db4bf74383/openstack-network-exporter/0.log" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.733908 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ef805480-81ec-4d0b-b2ca-06db4bf74383/ovsdbserver-nb/0.log" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.860028 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b981c8a5-ce76-4bc1-a018-28255391e3f2/openstack-network-exporter/0.log" Feb 26 12:11:41 crc kubenswrapper[4699]: I0226 12:11:41.946074 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b981c8a5-ce76-4bc1-a018-28255391e3f2/ovsdbserver-sb/0.log" Feb 26 12:11:42 crc kubenswrapper[4699]: I0226 12:11:42.071762 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d4878dd78-qpvzg_b7700bd0-21d8-4b96-9753-2619443038a3/placement-api/0.log" Feb 26 12:11:42 crc kubenswrapper[4699]: I0226 12:11:42.158883 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d4878dd78-qpvzg_b7700bd0-21d8-4b96-9753-2619443038a3/placement-log/0.log" Feb 26 12:11:42 crc kubenswrapper[4699]: I0226 12:11:42.261612 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b731314-eb90-4a19-a425-2f9282af2a7f/setup-container/0.log" Feb 26 12:11:42 crc kubenswrapper[4699]: I0226 12:11:42.491345 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b731314-eb90-4a19-a425-2f9282af2a7f/setup-container/0.log" Feb 26 12:11:42 crc kubenswrapper[4699]: I0226 12:11:42.493798 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0d9b2e6e-c43b-49ae-a71e-844610621e3e/setup-container/0.log" Feb 26 12:11:42 
crc kubenswrapper[4699]: I0226 12:11:42.568771 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b731314-eb90-4a19-a425-2f9282af2a7f/rabbitmq/0.log" Feb 26 12:11:42 crc kubenswrapper[4699]: I0226 12:11:42.716212 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0d9b2e6e-c43b-49ae-a71e-844610621e3e/setup-container/0.log" Feb 26 12:11:42 crc kubenswrapper[4699]: I0226 12:11:42.774181 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l_a1aabb80-3c23-4f5a-9bd1-4d573089856c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:42 crc kubenswrapper[4699]: I0226 12:11:42.868508 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0d9b2e6e-c43b-49ae-a71e-844610621e3e/rabbitmq/0.log" Feb 26 12:11:43 crc kubenswrapper[4699]: I0226 12:11:43.301760 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zdf2z_fcea0fcf-0c80-4334-9327-f0a57b385cc9/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:43 crc kubenswrapper[4699]: I0226 12:11:43.312855 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n_57bbec48-f33e-43b8-9f82-8cc3a42e7723/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:43 crc kubenswrapper[4699]: I0226 12:11:43.599724 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8w2tv_96b6beba-4e99-4cb7-b49b-3f211c5e12b7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:43 crc kubenswrapper[4699]: I0226 12:11:43.670541 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-t4sjg_2930a730-d5e2-49e1-a618-7428b999a73d/ssh-known-hosts-edpm-deployment/0.log" Feb 26 12:11:43 crc 
kubenswrapper[4699]: I0226 12:11:43.837564 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78cbc76b59-m6shv_5a4ece68-df2a-480c-9531-1d133d7f4bd0/proxy-server/0.log" Feb 26 12:11:43 crc kubenswrapper[4699]: I0226 12:11:43.982414 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78cbc76b59-m6shv_5a4ece68-df2a-480c-9531-1d133d7f4bd0/proxy-httpd/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.267675 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-lqqdx_9125ee3a-a0b6-469b-b79d-3a376f2d5d91/swift-ring-rebalance/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.316297 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/account-reaper/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.384631 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/account-auditor/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.488004 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/account-replicator/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.545986 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/account-server/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.554933 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/container-auditor/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.614180 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/container-replicator/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 
12:11:44.737765 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/container-server/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.744929 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/object-auditor/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.755963 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/container-updater/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.845902 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/object-expirer/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.951191 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/object-server/0.log" Feb 26 12:11:44 crc kubenswrapper[4699]: I0226 12:11:44.956139 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/object-replicator/0.log" Feb 26 12:11:45 crc kubenswrapper[4699]: I0226 12:11:45.009914 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/object-updater/0.log" Feb 26 12:11:45 crc kubenswrapper[4699]: I0226 12:11:45.073016 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/rsync/0.log" Feb 26 12:11:45 crc kubenswrapper[4699]: I0226 12:11:45.184206 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/swift-recon-cron/0.log" Feb 26 12:11:45 crc kubenswrapper[4699]: I0226 12:11:45.305632 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9_08bdd16a-fc18-4262-9175-a05b613a76c9/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:45 crc kubenswrapper[4699]: I0226 12:11:45.524998 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_19e02200-91be-49f8-8174-4a0bf6cda9dd/tempest-tests-tempest-tests-runner/0.log" Feb 26 12:11:45 crc kubenswrapper[4699]: I0226 12:11:45.547437 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_66beadbe-fd5d-48af-8a33-8a652c8d1c71/test-operator-logs-container/0.log" Feb 26 12:11:45 crc kubenswrapper[4699]: I0226 12:11:45.686077 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9npsm_974c869a-b430-4a83-81d0-ece37d67c0b0/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:11:52 crc kubenswrapper[4699]: I0226 12:11:52.915935 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2/memcached/0.log" Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.143305 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535132-hr4rf"] Feb 26 12:12:00 crc kubenswrapper[4699]: E0226 12:12:00.146176 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9944e38-a08c-4638-b132-1841c82d51c2" containerName="container-00" Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.146298 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9944e38-a08c-4638-b132-1841c82d51c2" containerName="container-00" Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.146700 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9944e38-a08c-4638-b132-1841c82d51c2" containerName="container-00" Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.147555 4699 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535132-hr4rf" Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.152044 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.152284 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.152399 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.156790 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535132-hr4rf"] Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.350889 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kshk8\" (UniqueName: \"kubernetes.io/projected/5ff3c3e0-4f58-401c-9f5f-b733727f73ff-kube-api-access-kshk8\") pod \"auto-csr-approver-29535132-hr4rf\" (UID: \"5ff3c3e0-4f58-401c-9f5f-b733727f73ff\") " pod="openshift-infra/auto-csr-approver-29535132-hr4rf" Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.453350 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kshk8\" (UniqueName: \"kubernetes.io/projected/5ff3c3e0-4f58-401c-9f5f-b733727f73ff-kube-api-access-kshk8\") pod \"auto-csr-approver-29535132-hr4rf\" (UID: \"5ff3c3e0-4f58-401c-9f5f-b733727f73ff\") " pod="openshift-infra/auto-csr-approver-29535132-hr4rf" Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.475751 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kshk8\" (UniqueName: \"kubernetes.io/projected/5ff3c3e0-4f58-401c-9f5f-b733727f73ff-kube-api-access-kshk8\") pod \"auto-csr-approver-29535132-hr4rf\" (UID: 
\"5ff3c3e0-4f58-401c-9f5f-b733727f73ff\") " pod="openshift-infra/auto-csr-approver-29535132-hr4rf" Feb 26 12:12:00 crc kubenswrapper[4699]: I0226 12:12:00.768257 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535132-hr4rf" Feb 26 12:12:01 crc kubenswrapper[4699]: I0226 12:12:01.200060 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535132-hr4rf"] Feb 26 12:12:01 crc kubenswrapper[4699]: I0226 12:12:01.929962 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535132-hr4rf" event={"ID":"5ff3c3e0-4f58-401c-9f5f-b733727f73ff","Type":"ContainerStarted","Data":"c74e194aee577dd844b3d47a42b28777ae42ce708f0b112dd1f7c9633e886051"} Feb 26 12:12:02 crc kubenswrapper[4699]: I0226 12:12:02.940622 4699 generic.go:334] "Generic (PLEG): container finished" podID="5ff3c3e0-4f58-401c-9f5f-b733727f73ff" containerID="747ddaa984d13eaf0f8ee9e7ae1b9299bffa91ea051e4eb23c1b1a2ab2aaf402" exitCode=0 Feb 26 12:12:02 crc kubenswrapper[4699]: I0226 12:12:02.940695 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535132-hr4rf" event={"ID":"5ff3c3e0-4f58-401c-9f5f-b733727f73ff","Type":"ContainerDied","Data":"747ddaa984d13eaf0f8ee9e7ae1b9299bffa91ea051e4eb23c1b1a2ab2aaf402"} Feb 26 12:12:04 crc kubenswrapper[4699]: I0226 12:12:04.287192 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535132-hr4rf" Feb 26 12:12:04 crc kubenswrapper[4699]: I0226 12:12:04.431368 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kshk8\" (UniqueName: \"kubernetes.io/projected/5ff3c3e0-4f58-401c-9f5f-b733727f73ff-kube-api-access-kshk8\") pod \"5ff3c3e0-4f58-401c-9f5f-b733727f73ff\" (UID: \"5ff3c3e0-4f58-401c-9f5f-b733727f73ff\") " Feb 26 12:12:04 crc kubenswrapper[4699]: I0226 12:12:04.440008 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff3c3e0-4f58-401c-9f5f-b733727f73ff-kube-api-access-kshk8" (OuterVolumeSpecName: "kube-api-access-kshk8") pod "5ff3c3e0-4f58-401c-9f5f-b733727f73ff" (UID: "5ff3c3e0-4f58-401c-9f5f-b733727f73ff"). InnerVolumeSpecName "kube-api-access-kshk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:12:04 crc kubenswrapper[4699]: I0226 12:12:04.534343 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kshk8\" (UniqueName: \"kubernetes.io/projected/5ff3c3e0-4f58-401c-9f5f-b733727f73ff-kube-api-access-kshk8\") on node \"crc\" DevicePath \"\"" Feb 26 12:12:04 crc kubenswrapper[4699]: I0226 12:12:04.962011 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535132-hr4rf" event={"ID":"5ff3c3e0-4f58-401c-9f5f-b733727f73ff","Type":"ContainerDied","Data":"c74e194aee577dd844b3d47a42b28777ae42ce708f0b112dd1f7c9633e886051"} Feb 26 12:12:04 crc kubenswrapper[4699]: I0226 12:12:04.962053 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c74e194aee577dd844b3d47a42b28777ae42ce708f0b112dd1f7c9633e886051" Feb 26 12:12:04 crc kubenswrapper[4699]: I0226 12:12:04.962105 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535132-hr4rf" Feb 26 12:12:05 crc kubenswrapper[4699]: I0226 12:12:05.358785 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535126-n7gpm"] Feb 26 12:12:05 crc kubenswrapper[4699]: I0226 12:12:05.367860 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535126-n7gpm"] Feb 26 12:12:06 crc kubenswrapper[4699]: I0226 12:12:06.274751 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d62b893f-dc84-4f3a-9c62-5c49c65be99f" path="/var/lib/kubelet/pods/d62b893f-dc84-4f3a-9c62-5c49c65be99f/volumes" Feb 26 12:12:10 crc kubenswrapper[4699]: I0226 12:12:10.636404 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-4k4sm_07c2552c-8182-4cfe-a397-39ad287029e5/manager/0.log" Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.070998 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/util/0.log" Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.279820 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/util/0.log" Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.306520 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/pull/0.log" Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.478494 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/pull/0.log" Feb 26 12:12:11 crc 
kubenswrapper[4699]: I0226 12:12:11.584625 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.584682 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.584731 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.585495 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.585543 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" gracePeriod=600 Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.653510 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/util/0.log" Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.691770 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/pull/0.log" Feb 26 12:12:11 crc kubenswrapper[4699]: E0226 12:12:11.707054 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.817207 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-xw85z_35555f68-d5c4-44b2-9dfa-af5f91f57c7c/manager/0.log" Feb 26 12:12:11 crc kubenswrapper[4699]: I0226 12:12:11.845019 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/extract/0.log" Feb 26 12:12:12 crc kubenswrapper[4699]: I0226 12:12:12.110440 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-t8c9f_7b204025-d5ff-4c74-96b9-6774b62e0cc4/manager/0.log" Feb 26 12:12:12 crc kubenswrapper[4699]: I0226 12:12:12.179938 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-jh7vz_27e251bb-8f9b-48d4-9ea3-81d03fd85244/manager/0.log" Feb 26 12:12:12 crc kubenswrapper[4699]: I0226 12:12:12.325690 4699 
generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" exitCode=0 Feb 26 12:12:12 crc kubenswrapper[4699]: I0226 12:12:12.325737 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"} Feb 26 12:12:12 crc kubenswrapper[4699]: I0226 12:12:12.325775 4699 scope.go:117] "RemoveContainer" containerID="243333360a6594f8acbe71e1e9197448e74ac1a0258671779fb6af974ca032dd" Feb 26 12:12:12 crc kubenswrapper[4699]: I0226 12:12:12.326604 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:12:12 crc kubenswrapper[4699]: E0226 12:12:12.326926 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:12:12 crc kubenswrapper[4699]: I0226 12:12:12.385179 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-qf9vd_619dff06-7255-4aab-9ffe-9f2561bcc904/manager/0.log" Feb 26 12:12:12 crc kubenswrapper[4699]: I0226 12:12:12.658499 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-5k85p_d56efcbf-3414-4bd1-9cbf-d56c434ac529/manager/0.log" Feb 26 12:12:12 crc kubenswrapper[4699]: I0226 12:12:12.951921 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-d2pxc_a2c419ab-2a99-4d37-b46c-b84024f24b2e/manager/0.log" Feb 26 12:12:12 crc kubenswrapper[4699]: I0226 12:12:12.979924 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-mtrs6_afbeb2d8-c332-447b-a931-9fe7b246914d/manager/0.log" Feb 26 12:12:13 crc kubenswrapper[4699]: I0226 12:12:13.146035 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-9gwwj_caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2/manager/0.log" Feb 26 12:12:13 crc kubenswrapper[4699]: I0226 12:12:13.323517 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-95whc_38eef260-c32f-4568-9936-6197ba984f05/manager/0.log" Feb 26 12:12:13 crc kubenswrapper[4699]: I0226 12:12:13.496952 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-6gblm_54959b79-361c-415a-986d-1af6d8eb6701/manager/0.log" Feb 26 12:12:13 crc kubenswrapper[4699]: I0226 12:12:13.755807 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-2wj2n_a6e7ca85-e18b-4605-9180-316f65b82006/manager/0.log" Feb 26 12:12:13 crc kubenswrapper[4699]: I0226 12:12:13.763074 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-4mghs_0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee/manager/0.log" Feb 26 12:12:13 crc kubenswrapper[4699]: I0226 12:12:13.962696 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb_ce7c40ca-05ad-49ca-a091-02ac588c3eb7/manager/0.log" Feb 26 12:12:14 crc kubenswrapper[4699]: I0226 12:12:14.437321 4699 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7c5cc54f9c-wjrrd_3a6d1210-ece5-4666-80bf-c7c7821e441c/operator/0.log" Feb 26 12:12:14 crc kubenswrapper[4699]: I0226 12:12:14.574347 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gmh8j_22cfe789-87ae-4b23-91c2-cbb5112e4285/registry-server/0.log" Feb 26 12:12:14 crc kubenswrapper[4699]: I0226 12:12:14.895775 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-96png_a90c4025-7bd1-401b-8f92-5f15a58fb3d6/manager/0.log" Feb 26 12:12:15 crc kubenswrapper[4699]: I0226 12:12:15.050357 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-jxr77_7545763d-d2d2-4b6e-980d-737062f0a894/manager/0.log" Feb 26 12:12:15 crc kubenswrapper[4699]: I0226 12:12:15.176292 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ghqf4_8d440653-f1c3-483c-a37d-463dcfc15224/operator/0.log" Feb 26 12:12:15 crc kubenswrapper[4699]: I0226 12:12:15.305371 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-bqvxr_33fc0a61-18c9-4e80-b898-92a5b1b71dac/manager/0.log" Feb 26 12:12:15 crc kubenswrapper[4699]: I0226 12:12:15.542309 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-f9kz5_15255a9b-0767-4518-8e81-ca9044f9190a/manager/0.log" Feb 26 12:12:15 crc kubenswrapper[4699]: I0226 12:12:15.609458 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-mwvnr_5be0c14a-e51f-4b69-ab58-c0cac66910e2/manager/0.log" Feb 26 12:12:15 crc kubenswrapper[4699]: I0226 12:12:15.788972 4699 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-fnnc7_a2b3bf3b-a815-4033-983b-eedc16b8609f/manager/0.log" Feb 26 12:12:16 crc kubenswrapper[4699]: I0226 12:12:16.003906 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-947f4f86b-m69sv_ebf1a568-be30-4ceb-bc67-e3158a0280b9/manager/0.log" Feb 26 12:12:16 crc kubenswrapper[4699]: I0226 12:12:16.885782 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-sndb9_1814471e-5f82-4464-9528-75da66d7235b/manager/0.log" Feb 26 12:12:25 crc kubenswrapper[4699]: I0226 12:12:25.260700 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:12:25 crc kubenswrapper[4699]: E0226 12:12:25.262529 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:12:35 crc kubenswrapper[4699]: I0226 12:12:35.098168 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-p9wj4_bad776f4-e24b-41f1-88d8-2b1fe6258783/control-plane-machine-set-operator/0.log" Feb 26 12:12:35 crc kubenswrapper[4699]: I0226 12:12:35.305310 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pw64v_5d015dd8-56c9-4f61-b133-4951cda91ca5/machine-api-operator/0.log" Feb 26 12:12:35 crc kubenswrapper[4699]: I0226 12:12:35.345901 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pw64v_5d015dd8-56c9-4f61-b133-4951cda91ca5/kube-rbac-proxy/0.log" Feb 26 12:12:37 crc kubenswrapper[4699]: I0226 12:12:37.261389 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:12:37 crc kubenswrapper[4699]: E0226 12:12:37.262053 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:12:39 crc kubenswrapper[4699]: I0226 12:12:39.818522 4699 scope.go:117] "RemoveContainer" containerID="0a9b5f9a5f2d730b937d8d7362f22b7e6fe3edad8ecb5523a71d611f339c4a8e" Feb 26 12:12:47 crc kubenswrapper[4699]: I0226 12:12:47.477706 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-fhn2n_fc42522b-c5f4-4df2-8435-3e3985dd960c/cert-manager-controller/0.log" Feb 26 12:12:47 crc kubenswrapper[4699]: I0226 12:12:47.650935 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-dswxp_f026799a-39c7-443e-9801-f046ba8ae94b/cert-manager-cainjector/0.log" Feb 26 12:12:47 crc kubenswrapper[4699]: I0226 12:12:47.717272 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-l2fdt_fad1f923-b22c-4c0d-9eb9-684636bc76c0/cert-manager-webhook/0.log" Feb 26 12:12:52 crc kubenswrapper[4699]: I0226 12:12:52.262581 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:12:52 crc kubenswrapper[4699]: E0226 12:12:52.263707 4699 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:12:59 crc kubenswrapper[4699]: I0226 12:12:59.392841 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-7f4bx_13fc1aa0-a043-4b42-952b-7f718ff577d2/nmstate-console-plugin/0.log" Feb 26 12:12:59 crc kubenswrapper[4699]: I0226 12:12:59.579535 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5jrwg_80de38f0-8620-4e27-988e-6d85d7c8bc24/nmstate-handler/0.log" Feb 26 12:12:59 crc kubenswrapper[4699]: I0226 12:12:59.625265 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-jnrsc_c4897df9-3a79-41bf-a7ba-7a72d888f8e1/kube-rbac-proxy/0.log" Feb 26 12:12:59 crc kubenswrapper[4699]: I0226 12:12:59.678641 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-jnrsc_c4897df9-3a79-41bf-a7ba-7a72d888f8e1/nmstate-metrics/0.log" Feb 26 12:12:59 crc kubenswrapper[4699]: I0226 12:12:59.776250 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-8l8n8_15312afe-49aa-4681-8513-6ed9c774d222/nmstate-operator/0.log" Feb 26 12:12:59 crc kubenswrapper[4699]: I0226 12:12:59.881418 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-qmw66_d674e733-7357-43e5-be9c-4d4e9bad252c/nmstate-webhook/0.log" Feb 26 12:13:06 crc kubenswrapper[4699]: I0226 12:13:06.268130 4699 scope.go:117] "RemoveContainer" 
containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:13:06 crc kubenswrapper[4699]: E0226 12:13:06.269002 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:13:17 crc kubenswrapper[4699]: I0226 12:13:17.260389 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:13:17 crc kubenswrapper[4699]: E0226 12:13:17.261416 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:13:24 crc kubenswrapper[4699]: I0226 12:13:24.707935 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-bs5nk_6ef6a9d7-6997-485a-a812-ded9d3a2df85/kube-rbac-proxy/0.log" Feb 26 12:13:24 crc kubenswrapper[4699]: I0226 12:13:24.761653 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-bs5nk_6ef6a9d7-6997-485a-a812-ded9d3a2df85/controller/0.log" Feb 26 12:13:24 crc kubenswrapper[4699]: I0226 12:13:24.915683 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-svsrb_35357e2c-2a03-46f8-bc28-f7daad3b679d/frr-k8s-webhook-server/0.log" Feb 26 12:13:24 crc kubenswrapper[4699]: I0226 
12:13:24.997239 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-frr-files/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.117277 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-metrics/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.138151 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-frr-files/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.143939 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-reloader/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.186943 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-reloader/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.376966 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-metrics/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.394648 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-reloader/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.395812 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-frr-files/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.411040 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-metrics/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.572476 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-frr-files/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.575407 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-reloader/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.586097 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/controller/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.599274 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-metrics/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.750497 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/frr-metrics/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.785489 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/kube-rbac-proxy-frr/0.log" Feb 26 12:13:25 crc kubenswrapper[4699]: I0226 12:13:25.785491 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/kube-rbac-proxy/0.log" Feb 26 12:13:26 crc kubenswrapper[4699]: I0226 12:13:26.007157 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/reloader/0.log" Feb 26 12:13:26 crc kubenswrapper[4699]: I0226 12:13:26.019869 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5d58b8658b-qjr5b_cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8/manager/0.log" Feb 26 12:13:26 crc kubenswrapper[4699]: I0226 12:13:26.210183 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6d98597f89-glkjh_af2438c1-8812-4bb1-8999-66cb8d804c05/webhook-server/0.log" Feb 26 12:13:26 crc kubenswrapper[4699]: I0226 12:13:26.453958 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l8phj_d656ca89-f955-44bb-9944-f75bf485a254/kube-rbac-proxy/0.log" Feb 26 12:13:27 crc kubenswrapper[4699]: I0226 12:13:27.001676 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l8phj_d656ca89-f955-44bb-9944-f75bf485a254/speaker/0.log" Feb 26 12:13:27 crc kubenswrapper[4699]: I0226 12:13:27.490916 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/frr/0.log" Feb 26 12:13:31 crc kubenswrapper[4699]: I0226 12:13:31.261554 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:13:31 crc kubenswrapper[4699]: E0226 12:13:31.262295 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:13:38 crc kubenswrapper[4699]: I0226 12:13:38.675225 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/util/0.log" Feb 26 12:13:38 crc kubenswrapper[4699]: I0226 12:13:38.802395 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/util/0.log" Feb 26 12:13:38 crc 
kubenswrapper[4699]: I0226 12:13:38.836154 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/pull/0.log" Feb 26 12:13:38 crc kubenswrapper[4699]: I0226 12:13:38.871550 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/pull/0.log" Feb 26 12:13:39 crc kubenswrapper[4699]: I0226 12:13:39.080224 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/util/0.log" Feb 26 12:13:39 crc kubenswrapper[4699]: I0226 12:13:39.081621 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/pull/0.log" Feb 26 12:13:39 crc kubenswrapper[4699]: I0226 12:13:39.088387 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/extract/0.log" Feb 26 12:13:39 crc kubenswrapper[4699]: I0226 12:13:39.225679 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-utilities/0.log" Feb 26 12:13:39 crc kubenswrapper[4699]: I0226 12:13:39.379486 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-content/0.log" Feb 26 12:13:39 crc kubenswrapper[4699]: I0226 12:13:39.402313 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-content/0.log" 
Feb 26 12:13:39 crc kubenswrapper[4699]: I0226 12:13:39.413159 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-utilities/0.log" Feb 26 12:13:39 crc kubenswrapper[4699]: I0226 12:13:39.577756 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-content/0.log" Feb 26 12:13:39 crc kubenswrapper[4699]: I0226 12:13:39.585499 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-utilities/0.log" Feb 26 12:13:39 crc kubenswrapper[4699]: I0226 12:13:39.799131 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-utilities/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.036488 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/registry-server/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.037319 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-content/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.056433 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-utilities/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.071553 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-content/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.207662 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-utilities/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.241599 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-content/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.452774 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/util/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.637023 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/pull/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.641026 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/util/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.662572 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/registry-server/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.672278 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/pull/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.827510 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/util/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.840896 4699 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/pull/0.log" Feb 26 12:13:40 crc kubenswrapper[4699]: I0226 12:13:40.910602 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/extract/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.007890 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nwbkq_43a980f6-1eff-4610-aa3e-69729c3eb7c7/marketplace-operator/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.039522 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-utilities/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.248384 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-utilities/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.252831 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-content/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.271269 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-content/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.466141 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-content/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.485826 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-utilities/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.586130 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/registry-server/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.735420 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-utilities/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.830308 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-utilities/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.833786 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-content/0.log" Feb 26 12:13:41 crc kubenswrapper[4699]: I0226 12:13:41.851425 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-content/0.log" Feb 26 12:13:42 crc kubenswrapper[4699]: I0226 12:13:42.007348 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-utilities/0.log" Feb 26 12:13:42 crc kubenswrapper[4699]: I0226 12:13:42.014996 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-content/0.log" Feb 26 12:13:42 crc kubenswrapper[4699]: I0226 12:13:42.418939 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/registry-server/0.log" Feb 26 
12:13:46 crc kubenswrapper[4699]: I0226 12:13:46.267365 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:13:46 crc kubenswrapper[4699]: E0226 12:13:46.267633 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:13:57 crc kubenswrapper[4699]: I0226 12:13:57.261445 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:13:57 crc kubenswrapper[4699]: E0226 12:13:57.262282 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.143080 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535134-bh8fn"] Feb 26 12:14:00 crc kubenswrapper[4699]: E0226 12:14:00.148770 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff3c3e0-4f58-401c-9f5f-b733727f73ff" containerName="oc" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.148794 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff3c3e0-4f58-401c-9f5f-b733727f73ff" containerName="oc" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.149053 4699 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5ff3c3e0-4f58-401c-9f5f-b733727f73ff" containerName="oc" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.149848 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535134-bh8fn" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.154362 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.154563 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.154720 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.171432 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535134-bh8fn"] Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.331258 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpvg5\" (UniqueName: \"kubernetes.io/projected/916cd984-33ed-4299-ade5-5064478d656f-kube-api-access-qpvg5\") pod \"auto-csr-approver-29535134-bh8fn\" (UID: \"916cd984-33ed-4299-ade5-5064478d656f\") " pod="openshift-infra/auto-csr-approver-29535134-bh8fn" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.433144 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpvg5\" (UniqueName: \"kubernetes.io/projected/916cd984-33ed-4299-ade5-5064478d656f-kube-api-access-qpvg5\") pod \"auto-csr-approver-29535134-bh8fn\" (UID: \"916cd984-33ed-4299-ade5-5064478d656f\") " pod="openshift-infra/auto-csr-approver-29535134-bh8fn" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.450521 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpvg5\" (UniqueName: 
\"kubernetes.io/projected/916cd984-33ed-4299-ade5-5064478d656f-kube-api-access-qpvg5\") pod \"auto-csr-approver-29535134-bh8fn\" (UID: \"916cd984-33ed-4299-ade5-5064478d656f\") " pod="openshift-infra/auto-csr-approver-29535134-bh8fn" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.488714 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535134-bh8fn" Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.982994 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535134-bh8fn"] Feb 26 12:14:00 crc kubenswrapper[4699]: I0226 12:14:00.983498 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 12:14:01 crc kubenswrapper[4699]: I0226 12:14:01.374036 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535134-bh8fn" event={"ID":"916cd984-33ed-4299-ade5-5064478d656f","Type":"ContainerStarted","Data":"1c9da386567c1145e06dd7ab813fc07e39cefd0bde6d9e1a25317925d1acc515"} Feb 26 12:14:03 crc kubenswrapper[4699]: I0226 12:14:03.395480 4699 generic.go:334] "Generic (PLEG): container finished" podID="916cd984-33ed-4299-ade5-5064478d656f" containerID="ae1928085c149280cf3addf69107c792048518ecf95f2de337f2886f53e0e594" exitCode=0 Feb 26 12:14:03 crc kubenswrapper[4699]: I0226 12:14:03.395689 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535134-bh8fn" event={"ID":"916cd984-33ed-4299-ade5-5064478d656f","Type":"ContainerDied","Data":"ae1928085c149280cf3addf69107c792048518ecf95f2de337f2886f53e0e594"} Feb 26 12:14:05 crc kubenswrapper[4699]: I0226 12:14:05.042725 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535134-bh8fn" Feb 26 12:14:05 crc kubenswrapper[4699]: I0226 12:14:05.054374 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpvg5\" (UniqueName: \"kubernetes.io/projected/916cd984-33ed-4299-ade5-5064478d656f-kube-api-access-qpvg5\") pod \"916cd984-33ed-4299-ade5-5064478d656f\" (UID: \"916cd984-33ed-4299-ade5-5064478d656f\") " Feb 26 12:14:05 crc kubenswrapper[4699]: I0226 12:14:05.064608 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916cd984-33ed-4299-ade5-5064478d656f-kube-api-access-qpvg5" (OuterVolumeSpecName: "kube-api-access-qpvg5") pod "916cd984-33ed-4299-ade5-5064478d656f" (UID: "916cd984-33ed-4299-ade5-5064478d656f"). InnerVolumeSpecName "kube-api-access-qpvg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:14:05 crc kubenswrapper[4699]: I0226 12:14:05.157283 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpvg5\" (UniqueName: \"kubernetes.io/projected/916cd984-33ed-4299-ade5-5064478d656f-kube-api-access-qpvg5\") on node \"crc\" DevicePath \"\"" Feb 26 12:14:05 crc kubenswrapper[4699]: I0226 12:14:05.415374 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535134-bh8fn" event={"ID":"916cd984-33ed-4299-ade5-5064478d656f","Type":"ContainerDied","Data":"1c9da386567c1145e06dd7ab813fc07e39cefd0bde6d9e1a25317925d1acc515"} Feb 26 12:14:05 crc kubenswrapper[4699]: I0226 12:14:05.415415 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c9da386567c1145e06dd7ab813fc07e39cefd0bde6d9e1a25317925d1acc515" Feb 26 12:14:05 crc kubenswrapper[4699]: I0226 12:14:05.415469 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535134-bh8fn" Feb 26 12:14:06 crc kubenswrapper[4699]: I0226 12:14:06.114239 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535128-4vkb4"] Feb 26 12:14:06 crc kubenswrapper[4699]: I0226 12:14:06.132253 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535128-4vkb4"] Feb 26 12:14:06 crc kubenswrapper[4699]: I0226 12:14:06.288250 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e519986-41ca-4360-b9bd-14a485e9a635" path="/var/lib/kubelet/pods/0e519986-41ca-4360-b9bd-14a485e9a635/volumes" Feb 26 12:14:10 crc kubenswrapper[4699]: I0226 12:14:10.265035 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:14:10 crc kubenswrapper[4699]: E0226 12:14:10.265914 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:14:22 crc kubenswrapper[4699]: I0226 12:14:22.261488 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:14:22 crc kubenswrapper[4699]: E0226 12:14:22.262154 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:14:33 crc kubenswrapper[4699]: I0226 12:14:33.791778 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qwws6"] Feb 26 12:14:33 crc kubenswrapper[4699]: E0226 12:14:33.793063 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916cd984-33ed-4299-ade5-5064478d656f" containerName="oc" Feb 26 12:14:33 crc kubenswrapper[4699]: I0226 12:14:33.793085 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="916cd984-33ed-4299-ade5-5064478d656f" containerName="oc" Feb 26 12:14:33 crc kubenswrapper[4699]: I0226 12:14:33.793369 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="916cd984-33ed-4299-ade5-5064478d656f" containerName="oc" Feb 26 12:14:33 crc kubenswrapper[4699]: I0226 12:14:33.794697 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:33 crc kubenswrapper[4699]: I0226 12:14:33.806865 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qwws6"] Feb 26 12:14:33 crc kubenswrapper[4699]: I0226 12:14:33.949434 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wmj\" (UniqueName: \"kubernetes.io/projected/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-kube-api-access-v6wmj\") pod \"community-operators-qwws6\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:33 crc kubenswrapper[4699]: I0226 12:14:33.949850 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-utilities\") pod \"community-operators-qwws6\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:33 crc 
kubenswrapper[4699]: I0226 12:14:33.949978 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-catalog-content\") pod \"community-operators-qwws6\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:34 crc kubenswrapper[4699]: I0226 12:14:34.052010 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-utilities\") pod \"community-operators-qwws6\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:34 crc kubenswrapper[4699]: I0226 12:14:34.052073 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-catalog-content\") pod \"community-operators-qwws6\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:34 crc kubenswrapper[4699]: I0226 12:14:34.052133 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6wmj\" (UniqueName: \"kubernetes.io/projected/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-kube-api-access-v6wmj\") pod \"community-operators-qwws6\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:34 crc kubenswrapper[4699]: I0226 12:14:34.052990 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-utilities\") pod \"community-operators-qwws6\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:34 crc 
kubenswrapper[4699]: I0226 12:14:34.053074 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-catalog-content\") pod \"community-operators-qwws6\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:34 crc kubenswrapper[4699]: I0226 12:14:34.082508 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6wmj\" (UniqueName: \"kubernetes.io/projected/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-kube-api-access-v6wmj\") pod \"community-operators-qwws6\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:34 crc kubenswrapper[4699]: I0226 12:14:34.188945 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:34 crc kubenswrapper[4699]: I0226 12:14:34.679734 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qwws6"] Feb 26 12:14:35 crc kubenswrapper[4699]: I0226 12:14:35.260777 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:14:35 crc kubenswrapper[4699]: E0226 12:14:35.261273 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:14:35 crc kubenswrapper[4699]: I0226 12:14:35.697382 4699 generic.go:334] "Generic (PLEG): container finished" podID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" 
containerID="f22c7b5f9734f5bd9f782943baa7fd673b2c739b83f0f0acf9560a4f0b14c5c3" exitCode=0 Feb 26 12:14:35 crc kubenswrapper[4699]: I0226 12:14:35.698029 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwws6" event={"ID":"c99a1b65-dd7a-4d1c-a767-43eb7192dea7","Type":"ContainerDied","Data":"f22c7b5f9734f5bd9f782943baa7fd673b2c739b83f0f0acf9560a4f0b14c5c3"} Feb 26 12:14:35 crc kubenswrapper[4699]: I0226 12:14:35.698089 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwws6" event={"ID":"c99a1b65-dd7a-4d1c-a767-43eb7192dea7","Type":"ContainerStarted","Data":"89eafaeedd18474def84950473646ec600025a974d15625aadc38ccb3c651b4c"} Feb 26 12:14:36 crc kubenswrapper[4699]: I0226 12:14:36.708816 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwws6" event={"ID":"c99a1b65-dd7a-4d1c-a767-43eb7192dea7","Type":"ContainerStarted","Data":"03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1"} Feb 26 12:14:37 crc kubenswrapper[4699]: I0226 12:14:37.722897 4699 generic.go:334] "Generic (PLEG): container finished" podID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" containerID="03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1" exitCode=0 Feb 26 12:14:37 crc kubenswrapper[4699]: I0226 12:14:37.723278 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwws6" event={"ID":"c99a1b65-dd7a-4d1c-a767-43eb7192dea7","Type":"ContainerDied","Data":"03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1"} Feb 26 12:14:38 crc kubenswrapper[4699]: I0226 12:14:38.734454 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwws6" event={"ID":"c99a1b65-dd7a-4d1c-a767-43eb7192dea7","Type":"ContainerStarted","Data":"bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c"} Feb 26 12:14:38 crc 
kubenswrapper[4699]: I0226 12:14:38.765133 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qwws6" podStartSLOduration=3.333039178 podStartE2EDuration="5.765094784s" podCreationTimestamp="2026-02-26 12:14:33 +0000 UTC" firstStartedPulling="2026-02-26 12:14:35.701312013 +0000 UTC m=+3821.512138447" lastFinishedPulling="2026-02-26 12:14:38.133367599 +0000 UTC m=+3823.944194053" observedRunningTime="2026-02-26 12:14:38.757065047 +0000 UTC m=+3824.567891491" watchObservedRunningTime="2026-02-26 12:14:38.765094784 +0000 UTC m=+3824.575921218" Feb 26 12:14:39 crc kubenswrapper[4699]: I0226 12:14:39.915425 4699 scope.go:117] "RemoveContainer" containerID="edcce5d1b2431ea73d4d1a16900e65c51edf48fce3e10f865133733ba98e31ff" Feb 26 12:14:44 crc kubenswrapper[4699]: I0226 12:14:44.189686 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:44 crc kubenswrapper[4699]: I0226 12:14:44.190769 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:44 crc kubenswrapper[4699]: I0226 12:14:44.233110 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:44 crc kubenswrapper[4699]: I0226 12:14:44.858744 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:44 crc kubenswrapper[4699]: I0226 12:14:44.909503 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qwws6"] Feb 26 12:14:46 crc kubenswrapper[4699]: I0226 12:14:46.819597 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qwws6" podUID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" containerName="registry-server" 
containerID="cri-o://bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c" gracePeriod=2 Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.356706 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.533059 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-catalog-content\") pod \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.533502 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-utilities\") pod \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.533683 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6wmj\" (UniqueName: \"kubernetes.io/projected/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-kube-api-access-v6wmj\") pod \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\" (UID: \"c99a1b65-dd7a-4d1c-a767-43eb7192dea7\") " Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.534300 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-utilities" (OuterVolumeSpecName: "utilities") pod "c99a1b65-dd7a-4d1c-a767-43eb7192dea7" (UID: "c99a1b65-dd7a-4d1c-a767-43eb7192dea7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.534626 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.547874 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-kube-api-access-v6wmj" (OuterVolumeSpecName: "kube-api-access-v6wmj") pod "c99a1b65-dd7a-4d1c-a767-43eb7192dea7" (UID: "c99a1b65-dd7a-4d1c-a767-43eb7192dea7"). InnerVolumeSpecName "kube-api-access-v6wmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.619437 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c99a1b65-dd7a-4d1c-a767-43eb7192dea7" (UID: "c99a1b65-dd7a-4d1c-a767-43eb7192dea7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.636569 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.636619 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6wmj\" (UniqueName: \"kubernetes.io/projected/c99a1b65-dd7a-4d1c-a767-43eb7192dea7-kube-api-access-v6wmj\") on node \"crc\" DevicePath \"\"" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.827957 4699 generic.go:334] "Generic (PLEG): container finished" podID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" containerID="bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c" exitCode=0 Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.828007 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwws6" event={"ID":"c99a1b65-dd7a-4d1c-a767-43eb7192dea7","Type":"ContainerDied","Data":"bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c"} Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.828042 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qwws6" event={"ID":"c99a1b65-dd7a-4d1c-a767-43eb7192dea7","Type":"ContainerDied","Data":"89eafaeedd18474def84950473646ec600025a974d15625aadc38ccb3c651b4c"} Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.828047 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qwws6" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.828062 4699 scope.go:117] "RemoveContainer" containerID="bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.864443 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qwws6"] Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.869874 4699 scope.go:117] "RemoveContainer" containerID="03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.876803 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qwws6"] Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.894880 4699 scope.go:117] "RemoveContainer" containerID="f22c7b5f9734f5bd9f782943baa7fd673b2c739b83f0f0acf9560a4f0b14c5c3" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.930270 4699 scope.go:117] "RemoveContainer" containerID="bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c" Feb 26 12:14:47 crc kubenswrapper[4699]: E0226 12:14:47.930704 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c\": container with ID starting with bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c not found: ID does not exist" containerID="bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.930770 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c"} err="failed to get container status \"bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c\": rpc error: code = NotFound desc = could not find 
container \"bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c\": container with ID starting with bb353a6cefd1490bc9489650a34b8b94c97dc0a81961d477aa68b76700f3d58c not found: ID does not exist" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.930799 4699 scope.go:117] "RemoveContainer" containerID="03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1" Feb 26 12:14:47 crc kubenswrapper[4699]: E0226 12:14:47.931088 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1\": container with ID starting with 03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1 not found: ID does not exist" containerID="03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.931141 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1"} err="failed to get container status \"03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1\": rpc error: code = NotFound desc = could not find container \"03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1\": container with ID starting with 03997dd7e5a08e62d93a69b33ec106072b174646b82c2a64e9e509952ea5dbd1 not found: ID does not exist" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.931169 4699 scope.go:117] "RemoveContainer" containerID="f22c7b5f9734f5bd9f782943baa7fd673b2c739b83f0f0acf9560a4f0b14c5c3" Feb 26 12:14:47 crc kubenswrapper[4699]: E0226 12:14:47.931580 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f22c7b5f9734f5bd9f782943baa7fd673b2c739b83f0f0acf9560a4f0b14c5c3\": container with ID starting with f22c7b5f9734f5bd9f782943baa7fd673b2c739b83f0f0acf9560a4f0b14c5c3 not found: ID does 
not exist" containerID="f22c7b5f9734f5bd9f782943baa7fd673b2c739b83f0f0acf9560a4f0b14c5c3" Feb 26 12:14:47 crc kubenswrapper[4699]: I0226 12:14:47.931611 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22c7b5f9734f5bd9f782943baa7fd673b2c739b83f0f0acf9560a4f0b14c5c3"} err="failed to get container status \"f22c7b5f9734f5bd9f782943baa7fd673b2c739b83f0f0acf9560a4f0b14c5c3\": rpc error: code = NotFound desc = could not find container \"f22c7b5f9734f5bd9f782943baa7fd673b2c739b83f0f0acf9560a4f0b14c5c3\": container with ID starting with f22c7b5f9734f5bd9f782943baa7fd673b2c739b83f0f0acf9560a4f0b14c5c3 not found: ID does not exist" Feb 26 12:14:48 crc kubenswrapper[4699]: I0226 12:14:48.262197 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:14:48 crc kubenswrapper[4699]: E0226 12:14:48.262414 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:14:48 crc kubenswrapper[4699]: I0226 12:14:48.272028 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" path="/var/lib/kubelet/pods/c99a1b65-dd7a-4d1c-a767-43eb7192dea7/volumes" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.150437 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb"] Feb 26 12:15:00 crc kubenswrapper[4699]: E0226 12:15:00.151553 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" 
containerName="extract-utilities" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.151570 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" containerName="extract-utilities" Feb 26 12:15:00 crc kubenswrapper[4699]: E0226 12:15:00.151586 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" containerName="registry-server" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.151592 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" containerName="registry-server" Feb 26 12:15:00 crc kubenswrapper[4699]: E0226 12:15:00.151604 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" containerName="extract-content" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.151609 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" containerName="extract-content" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.151794 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99a1b65-dd7a-4d1c-a767-43eb7192dea7" containerName="registry-server" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.152624 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.155848 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.156181 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.160648 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb"] Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.321424 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-secret-volume\") pod \"collect-profiles-29535135-wjstb\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.321498 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2q9m\" (UniqueName: \"kubernetes.io/projected/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-kube-api-access-z2q9m\") pod \"collect-profiles-29535135-wjstb\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.321563 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-config-volume\") pod \"collect-profiles-29535135-wjstb\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.422727 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2q9m\" (UniqueName: \"kubernetes.io/projected/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-kube-api-access-z2q9m\") pod \"collect-profiles-29535135-wjstb\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.422816 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-config-volume\") pod \"collect-profiles-29535135-wjstb\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.422944 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-secret-volume\") pod \"collect-profiles-29535135-wjstb\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.424223 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-config-volume\") pod \"collect-profiles-29535135-wjstb\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.429250 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-secret-volume\") pod \"collect-profiles-29535135-wjstb\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.440008 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2q9m\" (UniqueName: \"kubernetes.io/projected/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-kube-api-access-z2q9m\") pod \"collect-profiles-29535135-wjstb\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:00 crc kubenswrapper[4699]: I0226 12:15:00.510085 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:01 crc kubenswrapper[4699]: I0226 12:15:01.006522 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb"] Feb 26 12:15:01 crc kubenswrapper[4699]: I0226 12:15:01.945329 4699 generic.go:334] "Generic (PLEG): container finished" podID="a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7" containerID="dcd2b4d3e7e82abee5eb779094bdc847007f7d64e831426aba90d3a867cacda2" exitCode=0 Feb 26 12:15:01 crc kubenswrapper[4699]: I0226 12:15:01.945519 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" event={"ID":"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7","Type":"ContainerDied","Data":"dcd2b4d3e7e82abee5eb779094bdc847007f7d64e831426aba90d3a867cacda2"} Feb 26 12:15:01 crc kubenswrapper[4699]: I0226 12:15:01.946659 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" 
event={"ID":"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7","Type":"ContainerStarted","Data":"baa44810e6541d95a944872949381ca67f1a661fa4ff50b10dcb9803f9c72471"} Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.262455 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:15:03 crc kubenswrapper[4699]: E0226 12:15:03.263254 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.274516 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.398473 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2q9m\" (UniqueName: \"kubernetes.io/projected/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-kube-api-access-z2q9m\") pod \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") " Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.398908 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-secret-volume\") pod \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") " Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.399083 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-config-volume\") pod \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\" (UID: \"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7\") "
Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.399948 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-config-volume" (OuterVolumeSpecName: "config-volume") pod "a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7" (UID: "a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.402255 4699 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-config-volume\") on node \"crc\" DevicePath \"\""
Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.407256 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7" (UID: "a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.407462 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-kube-api-access-z2q9m" (OuterVolumeSpecName: "kube-api-access-z2q9m") pod "a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7" (UID: "a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7"). InnerVolumeSpecName "kube-api-access-z2q9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.504523 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2q9m\" (UniqueName: \"kubernetes.io/projected/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-kube-api-access-z2q9m\") on node \"crc\" DevicePath \"\""
Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.504583 4699 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.965040 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb" event={"ID":"a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7","Type":"ContainerDied","Data":"baa44810e6541d95a944872949381ca67f1a661fa4ff50b10dcb9803f9c72471"}
Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.965537 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baa44810e6541d95a944872949381ca67f1a661fa4ff50b10dcb9803f9c72471"
Feb 26 12:15:03 crc kubenswrapper[4699]: I0226 12:15:03.965251 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-wjstb"
Feb 26 12:15:04 crc kubenswrapper[4699]: I0226 12:15:04.383574 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj"]
Feb 26 12:15:04 crc kubenswrapper[4699]: I0226 12:15:04.391404 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535090-n42nj"]
Feb 26 12:15:06 crc kubenswrapper[4699]: I0226 12:15:06.279396 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b298a96-eca9-49eb-a547-f88e986f326e" path="/var/lib/kubelet/pods/9b298a96-eca9-49eb-a547-f88e986f326e/volumes"
Feb 26 12:15:14 crc kubenswrapper[4699]: I0226 12:15:14.260944 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"
Feb 26 12:15:14 crc kubenswrapper[4699]: E0226 12:15:14.261738 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 12:15:28 crc kubenswrapper[4699]: I0226 12:15:28.261967 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"
Feb 26 12:15:28 crc kubenswrapper[4699]: E0226 12:15:28.262853 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 12:15:31 crc kubenswrapper[4699]: I0226 12:15:31.252412 4699 generic.go:334] "Generic (PLEG): container finished" podID="2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" containerID="69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb" exitCode=0
Feb 26 12:15:31 crc kubenswrapper[4699]: I0226 12:15:31.252512 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4plh5/must-gather-cwqbr" event={"ID":"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9","Type":"ContainerDied","Data":"69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb"}
Feb 26 12:15:31 crc kubenswrapper[4699]: I0226 12:15:31.253566 4699 scope.go:117] "RemoveContainer" containerID="69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb"
Feb 26 12:15:32 crc kubenswrapper[4699]: I0226 12:15:32.053521 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4plh5_must-gather-cwqbr_2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9/gather/0.log"
Feb 26 12:15:39 crc kubenswrapper[4699]: I0226 12:15:39.558598 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4plh5/must-gather-cwqbr"]
Feb 26 12:15:39 crc kubenswrapper[4699]: I0226 12:15:39.559287 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4plh5/must-gather-cwqbr" podUID="2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" containerName="copy" containerID="cri-o://40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd" gracePeriod=2
Feb 26 12:15:39 crc kubenswrapper[4699]: I0226 12:15:39.567196 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4plh5/must-gather-cwqbr"]
Feb 26 12:15:39 crc kubenswrapper[4699]: I0226 12:15:39.998518 4699 scope.go:117] "RemoveContainer" containerID="81dc18175a458a0d1e57583f805b2614af5b4f06183622336860874df0cedc4e"
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.092770 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4plh5_must-gather-cwqbr_2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9/copy/0.log"
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.093612 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4plh5/must-gather-cwqbr"
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.260692 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"
Feb 26 12:15:40 crc kubenswrapper[4699]: E0226 12:15:40.261299 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.271881 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p96w\" (UniqueName: \"kubernetes.io/projected/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-kube-api-access-8p96w\") pod \"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9\" (UID: \"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9\") "
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.272270 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-must-gather-output\") pod \"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9\" (UID: \"2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9\") "
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.277236 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-kube-api-access-8p96w" (OuterVolumeSpecName: "kube-api-access-8p96w") pod "2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" (UID: "2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9"). InnerVolumeSpecName "kube-api-access-8p96w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.374988 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p96w\" (UniqueName: \"kubernetes.io/projected/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-kube-api-access-8p96w\") on node \"crc\" DevicePath \"\""
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.441566 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" (UID: "2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.450669 4699 generic.go:334] "Generic (PLEG): container finished" podID="2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" containerID="40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd" exitCode=143
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.450781 4699 scope.go:117] "RemoveContainer" containerID="40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd"
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.450832 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4plh5/must-gather-cwqbr"
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.475811 4699 scope.go:117] "RemoveContainer" containerID="69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb"
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.477050 4699 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.809713 4699 scope.go:117] "RemoveContainer" containerID="40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd"
Feb 26 12:15:40 crc kubenswrapper[4699]: E0226 12:15:40.821166 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd\": container with ID starting with 40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd not found: ID does not exist" containerID="40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd"
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.821216 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd"} err="failed to get container status \"40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd\": rpc error: code = NotFound desc = could not find container \"40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd\": container with ID starting with 40c5e22b0608cb2076c35bbaefac8f43550651c9e232dc9f005649022a951ddd not found: ID does not exist"
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.821246 4699 scope.go:117] "RemoveContainer" containerID="69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb"
Feb 26 12:15:40 crc kubenswrapper[4699]: E0226 12:15:40.824541 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb\": container with ID starting with 69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb not found: ID does not exist" containerID="69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb"
Feb 26 12:15:40 crc kubenswrapper[4699]: I0226 12:15:40.824592 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb"} err="failed to get container status \"69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb\": rpc error: code = NotFound desc = could not find container \"69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb\": container with ID starting with 69186eaeffdff5cb7d043fe084a54702933857d82206e64b726b17a9aa98bafb not found: ID does not exist"
Feb 26 12:15:42 crc kubenswrapper[4699]: I0226 12:15:42.272019 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" path="/var/lib/kubelet/pods/2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9/volumes"
Feb 26 12:15:51 crc kubenswrapper[4699]: I0226 12:15:51.260577 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"
Feb 26 12:15:51 crc kubenswrapper[4699]: E0226 12:15:51.261230 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.148156 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535136-zr5lr"]
Feb 26 12:16:00 crc kubenswrapper[4699]: E0226 12:16:00.149724 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" containerName="gather"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.149747 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" containerName="gather"
Feb 26 12:16:00 crc kubenswrapper[4699]: E0226 12:16:00.149783 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" containerName="copy"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.149791 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" containerName="copy"
Feb 26 12:16:00 crc kubenswrapper[4699]: E0226 12:16:00.149814 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7" containerName="collect-profiles"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.149823 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7" containerName="collect-profiles"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.150095 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" containerName="gather"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.150159 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6e42cb-6891-4f97-9ba8-b4c6ad63a7a9" containerName="copy"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.150177 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30d5384-a2b7-4e4b-8bf5-1bb62cdb82f7" containerName="collect-profiles"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.151028 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535136-zr5lr"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.153985 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.156094 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.164525 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.177531 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535136-zr5lr"]
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.267739 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgb5d\" (UniqueName: \"kubernetes.io/projected/502aea63-b1be-4c9e-850b-bc5a2503b628-kube-api-access-fgb5d\") pod \"auto-csr-approver-29535136-zr5lr\" (UID: \"502aea63-b1be-4c9e-850b-bc5a2503b628\") " pod="openshift-infra/auto-csr-approver-29535136-zr5lr"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.369909 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgb5d\" (UniqueName: \"kubernetes.io/projected/502aea63-b1be-4c9e-850b-bc5a2503b628-kube-api-access-fgb5d\") pod \"auto-csr-approver-29535136-zr5lr\" (UID: \"502aea63-b1be-4c9e-850b-bc5a2503b628\") " pod="openshift-infra/auto-csr-approver-29535136-zr5lr"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.387904 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgb5d\" (UniqueName: \"kubernetes.io/projected/502aea63-b1be-4c9e-850b-bc5a2503b628-kube-api-access-fgb5d\") pod \"auto-csr-approver-29535136-zr5lr\" (UID: \"502aea63-b1be-4c9e-850b-bc5a2503b628\") " pod="openshift-infra/auto-csr-approver-29535136-zr5lr"
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.517842 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535136-zr5lr"
Feb 26 12:16:00 crc kubenswrapper[4699]: W0226 12:16:00.940808 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod502aea63_b1be_4c9e_850b_bc5a2503b628.slice/crio-a3680e8d9d354f766a8d1a4052690172c397bfc0e5fac85d7ed29e97ed123cae WatchSource:0}: Error finding container a3680e8d9d354f766a8d1a4052690172c397bfc0e5fac85d7ed29e97ed123cae: Status 404 returned error can't find the container with id a3680e8d9d354f766a8d1a4052690172c397bfc0e5fac85d7ed29e97ed123cae
Feb 26 12:16:00 crc kubenswrapper[4699]: I0226 12:16:00.944306 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535136-zr5lr"]
Feb 26 12:16:01 crc kubenswrapper[4699]: I0226 12:16:01.652379 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535136-zr5lr" event={"ID":"502aea63-b1be-4c9e-850b-bc5a2503b628","Type":"ContainerStarted","Data":"a3680e8d9d354f766a8d1a4052690172c397bfc0e5fac85d7ed29e97ed123cae"}
Feb 26 12:16:02 crc kubenswrapper[4699]: I0226 12:16:02.667266 4699 generic.go:334] "Generic (PLEG): container finished" podID="502aea63-b1be-4c9e-850b-bc5a2503b628" containerID="6e828c6eb232b14fedfc4161c27c5a5dd3b91bd1fe215ef080f8deb69fce1e31" exitCode=0
Feb 26 12:16:02 crc kubenswrapper[4699]: I0226 12:16:02.667414 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535136-zr5lr" event={"ID":"502aea63-b1be-4c9e-850b-bc5a2503b628","Type":"ContainerDied","Data":"6e828c6eb232b14fedfc4161c27c5a5dd3b91bd1fe215ef080f8deb69fce1e31"}
Feb 26 12:16:04 crc kubenswrapper[4699]: I0226 12:16:04.072667 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535136-zr5lr"
Feb 26 12:16:04 crc kubenswrapper[4699]: I0226 12:16:04.246681 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgb5d\" (UniqueName: \"kubernetes.io/projected/502aea63-b1be-4c9e-850b-bc5a2503b628-kube-api-access-fgb5d\") pod \"502aea63-b1be-4c9e-850b-bc5a2503b628\" (UID: \"502aea63-b1be-4c9e-850b-bc5a2503b628\") "
Feb 26 12:16:04 crc kubenswrapper[4699]: I0226 12:16:04.254821 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502aea63-b1be-4c9e-850b-bc5a2503b628-kube-api-access-fgb5d" (OuterVolumeSpecName: "kube-api-access-fgb5d") pod "502aea63-b1be-4c9e-850b-bc5a2503b628" (UID: "502aea63-b1be-4c9e-850b-bc5a2503b628"). InnerVolumeSpecName "kube-api-access-fgb5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:16:04 crc kubenswrapper[4699]: I0226 12:16:04.261692 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"
Feb 26 12:16:04 crc kubenswrapper[4699]: E0226 12:16:04.262056 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 12:16:04 crc kubenswrapper[4699]: I0226 12:16:04.349084 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgb5d\" (UniqueName: \"kubernetes.io/projected/502aea63-b1be-4c9e-850b-bc5a2503b628-kube-api-access-fgb5d\") on node \"crc\" DevicePath \"\""
Feb 26 12:16:04 crc kubenswrapper[4699]: I0226 12:16:04.684407 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535136-zr5lr" event={"ID":"502aea63-b1be-4c9e-850b-bc5a2503b628","Type":"ContainerDied","Data":"a3680e8d9d354f766a8d1a4052690172c397bfc0e5fac85d7ed29e97ed123cae"}
Feb 26 12:16:04 crc kubenswrapper[4699]: I0226 12:16:04.684453 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3680e8d9d354f766a8d1a4052690172c397bfc0e5fac85d7ed29e97ed123cae"
Feb 26 12:16:04 crc kubenswrapper[4699]: I0226 12:16:04.684456 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535136-zr5lr"
Feb 26 12:16:05 crc kubenswrapper[4699]: I0226 12:16:05.141776 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535130-v52gt"]
Feb 26 12:16:05 crc kubenswrapper[4699]: I0226 12:16:05.159227 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535130-v52gt"]
Feb 26 12:16:06 crc kubenswrapper[4699]: I0226 12:16:06.270091 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="524c38c5-5560-45a6-aa15-3010000b2165" path="/var/lib/kubelet/pods/524c38c5-5560-45a6-aa15-3010000b2165/volumes"
Feb 26 12:16:19 crc kubenswrapper[4699]: I0226 12:16:19.261766 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"
Feb 26 12:16:19 crc kubenswrapper[4699]: E0226 12:16:19.262602 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 12:16:31 crc kubenswrapper[4699]: I0226 12:16:31.261202 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"
Feb 26 12:16:31 crc kubenswrapper[4699]: E0226 12:16:31.261963 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 12:16:40 crc kubenswrapper[4699]: I0226 12:16:40.150084 4699 scope.go:117] "RemoveContainer" containerID="548a0e8c1b14580465351f41c66bafc1b217669a68f00a69bd71038d87540f9f"
Feb 26 12:16:40 crc kubenswrapper[4699]: I0226 12:16:40.176757 4699 scope.go:117] "RemoveContainer" containerID="b1e1f8248ccd17084f1b3aa21ad1265018f7368ffdb4ddbf286721c65474aad5"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.060527 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7xmfl"]
Feb 26 12:16:46 crc kubenswrapper[4699]: E0226 12:16:46.061797 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502aea63-b1be-4c9e-850b-bc5a2503b628" containerName="oc"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.061827 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="502aea63-b1be-4c9e-850b-bc5a2503b628" containerName="oc"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.062056 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="502aea63-b1be-4c9e-850b-bc5a2503b628" containerName="oc"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.063399 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.080461 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xmfl"]
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.215164 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-utilities\") pod \"certified-operators-7xmfl\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") " pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.215221 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-catalog-content\") pod \"certified-operators-7xmfl\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") " pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.215428 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrcvh\" (UniqueName: \"kubernetes.io/projected/50f381e3-7f33-484e-92d9-fae178f3c093-kube-api-access-xrcvh\") pod \"certified-operators-7xmfl\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") " pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.267274 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"
Feb 26 12:16:46 crc kubenswrapper[4699]: E0226 12:16:46.267644 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.317867 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-utilities\") pod \"certified-operators-7xmfl\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") " pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.317948 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-catalog-content\") pod \"certified-operators-7xmfl\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") " pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.318007 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrcvh\" (UniqueName: \"kubernetes.io/projected/50f381e3-7f33-484e-92d9-fae178f3c093-kube-api-access-xrcvh\") pod \"certified-operators-7xmfl\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") " pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.318510 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-utilities\") pod \"certified-operators-7xmfl\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") " pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.318725 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-catalog-content\") pod \"certified-operators-7xmfl\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") " pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.336309 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrcvh\" (UniqueName: \"kubernetes.io/projected/50f381e3-7f33-484e-92d9-fae178f3c093-kube-api-access-xrcvh\") pod \"certified-operators-7xmfl\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") " pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.392152 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:46 crc kubenswrapper[4699]: I0226 12:16:46.913558 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xmfl"]
Feb 26 12:16:47 crc kubenswrapper[4699]: I0226 12:16:47.094328 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xmfl" event={"ID":"50f381e3-7f33-484e-92d9-fae178f3c093","Type":"ContainerStarted","Data":"a6d5b25b4d177d8555d869e48757c6481e857e538c0ed0ef4d0db5b527a06ba0"}
Feb 26 12:16:48 crc kubenswrapper[4699]: I0226 12:16:48.105056 4699 generic.go:334] "Generic (PLEG): container finished" podID="50f381e3-7f33-484e-92d9-fae178f3c093" containerID="e735c3339391fd0f13911f50b898e1f2278f444401de37ed0fd6df5e46d1bba4" exitCode=0
Feb 26 12:16:48 crc kubenswrapper[4699]: I0226 12:16:48.105164 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xmfl" event={"ID":"50f381e3-7f33-484e-92d9-fae178f3c093","Type":"ContainerDied","Data":"e735c3339391fd0f13911f50b898e1f2278f444401de37ed0fd6df5e46d1bba4"}
Feb 26 12:16:50 crc kubenswrapper[4699]: I0226 12:16:50.126090 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xmfl" event={"ID":"50f381e3-7f33-484e-92d9-fae178f3c093","Type":"ContainerDied","Data":"b3f88c4dbc90d0e0841ba117e9a0bc5f2795f5cfd24580f6eef4ea3f84df4552"}
Feb 26 12:16:50 crc kubenswrapper[4699]: I0226 12:16:50.126839 4699 generic.go:334] "Generic (PLEG): container finished" podID="50f381e3-7f33-484e-92d9-fae178f3c093" containerID="b3f88c4dbc90d0e0841ba117e9a0bc5f2795f5cfd24580f6eef4ea3f84df4552" exitCode=0
Feb 26 12:16:52 crc kubenswrapper[4699]: I0226 12:16:52.147190 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xmfl" event={"ID":"50f381e3-7f33-484e-92d9-fae178f3c093","Type":"ContainerStarted","Data":"17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54"}
Feb 26 12:16:52 crc kubenswrapper[4699]: I0226 12:16:52.172378 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7xmfl" podStartSLOduration=2.384850505 podStartE2EDuration="6.17235088s" podCreationTimestamp="2026-02-26 12:16:46 +0000 UTC" firstStartedPulling="2026-02-26 12:16:48.107497766 +0000 UTC m=+3953.918324210" lastFinishedPulling="2026-02-26 12:16:51.894998151 +0000 UTC m=+3957.705824585" observedRunningTime="2026-02-26 12:16:52.166490764 +0000 UTC m=+3957.977317198" watchObservedRunningTime="2026-02-26 12:16:52.17235088 +0000 UTC m=+3957.983177314"
Feb 26 12:16:56 crc kubenswrapper[4699]: I0226 12:16:56.392692 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:56 crc kubenswrapper[4699]: I0226 12:16:56.393206 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:56 crc kubenswrapper[4699]: I0226 12:16:56.437459 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:57 crc kubenswrapper[4699]: I0226 12:16:57.248064 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:57 crc kubenswrapper[4699]: I0226 12:16:57.293436 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xmfl"]
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.224503 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7xmfl" podUID="50f381e3-7f33-484e-92d9-fae178f3c093" containerName="registry-server" containerID="cri-o://17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54" gracePeriod=2
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.261889 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7"
Feb 26 12:16:59 crc kubenswrapper[4699]: E0226 12:16:59.262277 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.741847 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xmfl"
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.876742 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-utilities\") pod \"50f381e3-7f33-484e-92d9-fae178f3c093\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") "
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.876973 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrcvh\" (UniqueName: \"kubernetes.io/projected/50f381e3-7f33-484e-92d9-fae178f3c093-kube-api-access-xrcvh\") pod \"50f381e3-7f33-484e-92d9-fae178f3c093\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") "
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.877006 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-catalog-content\") pod \"50f381e3-7f33-484e-92d9-fae178f3c093\" (UID: \"50f381e3-7f33-484e-92d9-fae178f3c093\") "
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.877745 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-utilities" (OuterVolumeSpecName: "utilities") pod "50f381e3-7f33-484e-92d9-fae178f3c093" (UID: "50f381e3-7f33-484e-92d9-fae178f3c093"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.882368 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f381e3-7f33-484e-92d9-fae178f3c093-kube-api-access-xrcvh" (OuterVolumeSpecName: "kube-api-access-xrcvh") pod "50f381e3-7f33-484e-92d9-fae178f3c093" (UID: "50f381e3-7f33-484e-92d9-fae178f3c093"). InnerVolumeSpecName "kube-api-access-xrcvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.944519 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50f381e3-7f33-484e-92d9-fae178f3c093" (UID: "50f381e3-7f33-484e-92d9-fae178f3c093"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.979874 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrcvh\" (UniqueName: \"kubernetes.io/projected/50f381e3-7f33-484e-92d9-fae178f3c093-kube-api-access-xrcvh\") on node \"crc\" DevicePath \"\""
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.979915 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 12:16:59 crc kubenswrapper[4699]: I0226 12:16:59.979924 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50f381e3-7f33-484e-92d9-fae178f3c093-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.235375 4699 generic.go:334] "Generic (PLEG): container finished" podID="50f381e3-7f33-484e-92d9-fae178f3c093" containerID="17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54" exitCode=0
Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.235419 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xmfl" event={"ID":"50f381e3-7f33-484e-92d9-fae178f3c093","Type":"ContainerDied","Data":"17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54"}
Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.235444 4699 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openshift-marketplace/certified-operators-7xmfl" event={"ID":"50f381e3-7f33-484e-92d9-fae178f3c093","Type":"ContainerDied","Data":"a6d5b25b4d177d8555d869e48757c6481e857e538c0ed0ef4d0db5b527a06ba0"} Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.235447 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xmfl" Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.235461 4699 scope.go:117] "RemoveContainer" containerID="17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54" Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.258509 4699 scope.go:117] "RemoveContainer" containerID="b3f88c4dbc90d0e0841ba117e9a0bc5f2795f5cfd24580f6eef4ea3f84df4552" Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.285975 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xmfl"] Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.286149 4699 scope.go:117] "RemoveContainer" containerID="e735c3339391fd0f13911f50b898e1f2278f444401de37ed0fd6df5e46d1bba4" Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.292163 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7xmfl"] Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.330212 4699 scope.go:117] "RemoveContainer" containerID="17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54" Feb 26 12:17:00 crc kubenswrapper[4699]: E0226 12:17:00.330835 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54\": container with ID starting with 17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54 not found: ID does not exist" containerID="17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54" Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 
12:17:00.330951 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54"} err="failed to get container status \"17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54\": rpc error: code = NotFound desc = could not find container \"17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54\": container with ID starting with 17dd68a15017639b9cdb1f6b050d9eec24aaeef2b101103c764ca0a655082d54 not found: ID does not exist" Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.331046 4699 scope.go:117] "RemoveContainer" containerID="b3f88c4dbc90d0e0841ba117e9a0bc5f2795f5cfd24580f6eef4ea3f84df4552" Feb 26 12:17:00 crc kubenswrapper[4699]: E0226 12:17:00.335283 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3f88c4dbc90d0e0841ba117e9a0bc5f2795f5cfd24580f6eef4ea3f84df4552\": container with ID starting with b3f88c4dbc90d0e0841ba117e9a0bc5f2795f5cfd24580f6eef4ea3f84df4552 not found: ID does not exist" containerID="b3f88c4dbc90d0e0841ba117e9a0bc5f2795f5cfd24580f6eef4ea3f84df4552" Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.335314 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3f88c4dbc90d0e0841ba117e9a0bc5f2795f5cfd24580f6eef4ea3f84df4552"} err="failed to get container status \"b3f88c4dbc90d0e0841ba117e9a0bc5f2795f5cfd24580f6eef4ea3f84df4552\": rpc error: code = NotFound desc = could not find container \"b3f88c4dbc90d0e0841ba117e9a0bc5f2795f5cfd24580f6eef4ea3f84df4552\": container with ID starting with b3f88c4dbc90d0e0841ba117e9a0bc5f2795f5cfd24580f6eef4ea3f84df4552 not found: ID does not exist" Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.335331 4699 scope.go:117] "RemoveContainer" containerID="e735c3339391fd0f13911f50b898e1f2278f444401de37ed0fd6df5e46d1bba4" Feb 26 12:17:00 crc 
kubenswrapper[4699]: E0226 12:17:00.335587 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e735c3339391fd0f13911f50b898e1f2278f444401de37ed0fd6df5e46d1bba4\": container with ID starting with e735c3339391fd0f13911f50b898e1f2278f444401de37ed0fd6df5e46d1bba4 not found: ID does not exist" containerID="e735c3339391fd0f13911f50b898e1f2278f444401de37ed0fd6df5e46d1bba4" Feb 26 12:17:00 crc kubenswrapper[4699]: I0226 12:17:00.335617 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e735c3339391fd0f13911f50b898e1f2278f444401de37ed0fd6df5e46d1bba4"} err="failed to get container status \"e735c3339391fd0f13911f50b898e1f2278f444401de37ed0fd6df5e46d1bba4\": rpc error: code = NotFound desc = could not find container \"e735c3339391fd0f13911f50b898e1f2278f444401de37ed0fd6df5e46d1bba4\": container with ID starting with e735c3339391fd0f13911f50b898e1f2278f444401de37ed0fd6df5e46d1bba4 not found: ID does not exist" Feb 26 12:17:02 crc kubenswrapper[4699]: I0226 12:17:02.270407 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f381e3-7f33-484e-92d9-fae178f3c093" path="/var/lib/kubelet/pods/50f381e3-7f33-484e-92d9-fae178f3c093/volumes" Feb 26 12:17:12 crc kubenswrapper[4699]: I0226 12:17:12.261451 4699 scope.go:117] "RemoveContainer" containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:17:13 crc kubenswrapper[4699]: I0226 12:17:13.370155 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"321e80bae8579e8007aa1cc495575fd7eef57d9379aadf703c862dea223958e5"} Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.317466 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-97nmz"] Feb 26 12:17:29 crc 
kubenswrapper[4699]: E0226 12:17:29.318431 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f381e3-7f33-484e-92d9-fae178f3c093" containerName="extract-utilities" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.318446 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f381e3-7f33-484e-92d9-fae178f3c093" containerName="extract-utilities" Feb 26 12:17:29 crc kubenswrapper[4699]: E0226 12:17:29.318455 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f381e3-7f33-484e-92d9-fae178f3c093" containerName="extract-content" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.318460 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f381e3-7f33-484e-92d9-fae178f3c093" containerName="extract-content" Feb 26 12:17:29 crc kubenswrapper[4699]: E0226 12:17:29.318482 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f381e3-7f33-484e-92d9-fae178f3c093" containerName="registry-server" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.318488 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f381e3-7f33-484e-92d9-fae178f3c093" containerName="registry-server" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.318661 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f381e3-7f33-484e-92d9-fae178f3c093" containerName="registry-server" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.320201 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.352834 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97nmz"] Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.451824 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-catalog-content\") pod \"redhat-marketplace-97nmz\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.452060 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljf8g\" (UniqueName: \"kubernetes.io/projected/5f9a62e3-8b3a-4741-93f3-910d206d1bde-kube-api-access-ljf8g\") pod \"redhat-marketplace-97nmz\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.452235 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-utilities\") pod \"redhat-marketplace-97nmz\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.554072 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-catalog-content\") pod \"redhat-marketplace-97nmz\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.554183 4699 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ljf8g\" (UniqueName: \"kubernetes.io/projected/5f9a62e3-8b3a-4741-93f3-910d206d1bde-kube-api-access-ljf8g\") pod \"redhat-marketplace-97nmz\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.554249 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-utilities\") pod \"redhat-marketplace-97nmz\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.554615 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-catalog-content\") pod \"redhat-marketplace-97nmz\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.554798 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-utilities\") pod \"redhat-marketplace-97nmz\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.574761 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljf8g\" (UniqueName: \"kubernetes.io/projected/5f9a62e3-8b3a-4741-93f3-910d206d1bde-kube-api-access-ljf8g\") pod \"redhat-marketplace-97nmz\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:29 crc kubenswrapper[4699]: I0226 12:17:29.647451 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:30 crc kubenswrapper[4699]: I0226 12:17:30.172077 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97nmz"] Feb 26 12:17:30 crc kubenswrapper[4699]: I0226 12:17:30.529274 4699 generic.go:334] "Generic (PLEG): container finished" podID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" containerID="04bee53dc64cf311afb693c3d6c9eb4ce7e191e7b22c1fba0792d43c3e39874f" exitCode=0 Feb 26 12:17:30 crc kubenswrapper[4699]: I0226 12:17:30.529333 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97nmz" event={"ID":"5f9a62e3-8b3a-4741-93f3-910d206d1bde","Type":"ContainerDied","Data":"04bee53dc64cf311afb693c3d6c9eb4ce7e191e7b22c1fba0792d43c3e39874f"} Feb 26 12:17:30 crc kubenswrapper[4699]: I0226 12:17:30.529555 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97nmz" event={"ID":"5f9a62e3-8b3a-4741-93f3-910d206d1bde","Type":"ContainerStarted","Data":"9636133c3447885de744ca24637f9c92a5da0be59cc4b57a4b972a5ae93e8659"} Feb 26 12:17:31 crc kubenswrapper[4699]: I0226 12:17:31.540652 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97nmz" event={"ID":"5f9a62e3-8b3a-4741-93f3-910d206d1bde","Type":"ContainerStarted","Data":"0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac"} Feb 26 12:17:32 crc kubenswrapper[4699]: I0226 12:17:32.551075 4699 generic.go:334] "Generic (PLEG): container finished" podID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" containerID="0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac" exitCode=0 Feb 26 12:17:32 crc kubenswrapper[4699]: I0226 12:17:32.551179 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97nmz" 
event={"ID":"5f9a62e3-8b3a-4741-93f3-910d206d1bde","Type":"ContainerDied","Data":"0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac"} Feb 26 12:17:33 crc kubenswrapper[4699]: I0226 12:17:33.562308 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97nmz" event={"ID":"5f9a62e3-8b3a-4741-93f3-910d206d1bde","Type":"ContainerStarted","Data":"35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d"} Feb 26 12:17:33 crc kubenswrapper[4699]: I0226 12:17:33.607141 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-97nmz" podStartSLOduration=2.212645584 podStartE2EDuration="4.607098911s" podCreationTimestamp="2026-02-26 12:17:29 +0000 UTC" firstStartedPulling="2026-02-26 12:17:30.531078007 +0000 UTC m=+3996.341904451" lastFinishedPulling="2026-02-26 12:17:32.925531344 +0000 UTC m=+3998.736357778" observedRunningTime="2026-02-26 12:17:33.585795488 +0000 UTC m=+3999.396621922" watchObservedRunningTime="2026-02-26 12:17:33.607098911 +0000 UTC m=+3999.417925345" Feb 26 12:17:39 crc kubenswrapper[4699]: I0226 12:17:39.660052 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:39 crc kubenswrapper[4699]: I0226 12:17:39.660611 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:39 crc kubenswrapper[4699]: I0226 12:17:39.716760 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:40 crc kubenswrapper[4699]: I0226 12:17:40.301401 4699 scope.go:117] "RemoveContainer" containerID="1830ebb83943317d1452f94ddd1bbd24c88d43dcb4e4541e0c9c10d16e425c29" Feb 26 12:17:40 crc kubenswrapper[4699]: I0226 12:17:40.691852 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:40 crc kubenswrapper[4699]: I0226 12:17:40.743935 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-97nmz"] Feb 26 12:17:42 crc kubenswrapper[4699]: I0226 12:17:42.650395 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-97nmz" podUID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" containerName="registry-server" containerID="cri-o://35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d" gracePeriod=2 Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.161976 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.344765 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-utilities\") pod \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.344907 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-catalog-content\") pod \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.345005 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljf8g\" (UniqueName: \"kubernetes.io/projected/5f9a62e3-8b3a-4741-93f3-910d206d1bde-kube-api-access-ljf8g\") pod \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\" (UID: \"5f9a62e3-8b3a-4741-93f3-910d206d1bde\") " Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.345608 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-utilities" (OuterVolumeSpecName: "utilities") pod "5f9a62e3-8b3a-4741-93f3-910d206d1bde" (UID: "5f9a62e3-8b3a-4741-93f3-910d206d1bde"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.381293 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f9a62e3-8b3a-4741-93f3-910d206d1bde" (UID: "5f9a62e3-8b3a-4741-93f3-910d206d1bde"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.447326 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.447383 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9a62e3-8b3a-4741-93f3-910d206d1bde-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.626294 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9a62e3-8b3a-4741-93f3-910d206d1bde-kube-api-access-ljf8g" (OuterVolumeSpecName: "kube-api-access-ljf8g") pod "5f9a62e3-8b3a-4741-93f3-910d206d1bde" (UID: "5f9a62e3-8b3a-4741-93f3-910d206d1bde"). InnerVolumeSpecName "kube-api-access-ljf8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.650768 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljf8g\" (UniqueName: \"kubernetes.io/projected/5f9a62e3-8b3a-4741-93f3-910d206d1bde-kube-api-access-ljf8g\") on node \"crc\" DevicePath \"\"" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.665267 4699 generic.go:334] "Generic (PLEG): container finished" podID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" containerID="35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d" exitCode=0 Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.665329 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97nmz" event={"ID":"5f9a62e3-8b3a-4741-93f3-910d206d1bde","Type":"ContainerDied","Data":"35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d"} Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.665365 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97nmz" event={"ID":"5f9a62e3-8b3a-4741-93f3-910d206d1bde","Type":"ContainerDied","Data":"9636133c3447885de744ca24637f9c92a5da0be59cc4b57a4b972a5ae93e8659"} Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.665388 4699 scope.go:117] "RemoveContainer" containerID="35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.665574 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97nmz" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.692314 4699 scope.go:117] "RemoveContainer" containerID="0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.714360 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-97nmz"] Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.722912 4699 scope.go:117] "RemoveContainer" containerID="04bee53dc64cf311afb693c3d6c9eb4ce7e191e7b22c1fba0792d43c3e39874f" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.726990 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-97nmz"] Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.768080 4699 scope.go:117] "RemoveContainer" containerID="35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d" Feb 26 12:17:43 crc kubenswrapper[4699]: E0226 12:17:43.768669 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d\": container with ID starting with 35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d not found: ID does not exist" containerID="35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.768757 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d"} err="failed to get container status \"35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d\": rpc error: code = NotFound desc = could not find container \"35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d\": container with ID starting with 35f7239c988fe25e7011c5f22473f2854bb1a482de1b024e4b07254c4b2ccb5d not found: 
ID does not exist" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.768807 4699 scope.go:117] "RemoveContainer" containerID="0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac" Feb 26 12:17:43 crc kubenswrapper[4699]: E0226 12:17:43.769261 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac\": container with ID starting with 0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac not found: ID does not exist" containerID="0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.769292 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac"} err="failed to get container status \"0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac\": rpc error: code = NotFound desc = could not find container \"0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac\": container with ID starting with 0143e2f9109b3c28c30aa19cdf7638765b67fdbdd672ecbff1c0d603640189ac not found: ID does not exist" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.769308 4699 scope.go:117] "RemoveContainer" containerID="04bee53dc64cf311afb693c3d6c9eb4ce7e191e7b22c1fba0792d43c3e39874f" Feb 26 12:17:43 crc kubenswrapper[4699]: E0226 12:17:43.769567 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04bee53dc64cf311afb693c3d6c9eb4ce7e191e7b22c1fba0792d43c3e39874f\": container with ID starting with 04bee53dc64cf311afb693c3d6c9eb4ce7e191e7b22c1fba0792d43c3e39874f not found: ID does not exist" containerID="04bee53dc64cf311afb693c3d6c9eb4ce7e191e7b22c1fba0792d43c3e39874f" Feb 26 12:17:43 crc kubenswrapper[4699]: I0226 12:17:43.769588 4699 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04bee53dc64cf311afb693c3d6c9eb4ce7e191e7b22c1fba0792d43c3e39874f"} err="failed to get container status \"04bee53dc64cf311afb693c3d6c9eb4ce7e191e7b22c1fba0792d43c3e39874f\": rpc error: code = NotFound desc = could not find container \"04bee53dc64cf311afb693c3d6c9eb4ce7e191e7b22c1fba0792d43c3e39874f\": container with ID starting with 04bee53dc64cf311afb693c3d6c9eb4ce7e191e7b22c1fba0792d43c3e39874f not found: ID does not exist" Feb 26 12:17:44 crc kubenswrapper[4699]: I0226 12:17:44.282576 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" path="/var/lib/kubelet/pods/5f9a62e3-8b3a-4741-93f3-910d206d1bde/volumes" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.148410 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535138-ghxdv"] Feb 26 12:18:00 crc kubenswrapper[4699]: E0226 12:18:00.149384 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" containerName="extract-utilities" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.149398 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" containerName="extract-utilities" Feb 26 12:18:00 crc kubenswrapper[4699]: E0226 12:18:00.149411 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" containerName="registry-server" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.149420 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" containerName="registry-server" Feb 26 12:18:00 crc kubenswrapper[4699]: E0226 12:18:00.149429 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" containerName="extract-content" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.149434 4699 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" containerName="extract-content" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.149676 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f9a62e3-8b3a-4741-93f3-910d206d1bde" containerName="registry-server" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.151955 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535138-ghxdv" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.154393 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.154687 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.156906 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.161998 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535138-ghxdv"] Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.302041 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bd8m\" (UniqueName: \"kubernetes.io/projected/3d73a20e-eea0-421b-8efd-6fd86f1e4d98-kube-api-access-9bd8m\") pod \"auto-csr-approver-29535138-ghxdv\" (UID: \"3d73a20e-eea0-421b-8efd-6fd86f1e4d98\") " pod="openshift-infra/auto-csr-approver-29535138-ghxdv" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.403847 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bd8m\" (UniqueName: \"kubernetes.io/projected/3d73a20e-eea0-421b-8efd-6fd86f1e4d98-kube-api-access-9bd8m\") pod \"auto-csr-approver-29535138-ghxdv\" (UID: 
\"3d73a20e-eea0-421b-8efd-6fd86f1e4d98\") " pod="openshift-infra/auto-csr-approver-29535138-ghxdv" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.420860 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bd8m\" (UniqueName: \"kubernetes.io/projected/3d73a20e-eea0-421b-8efd-6fd86f1e4d98-kube-api-access-9bd8m\") pod \"auto-csr-approver-29535138-ghxdv\" (UID: \"3d73a20e-eea0-421b-8efd-6fd86f1e4d98\") " pod="openshift-infra/auto-csr-approver-29535138-ghxdv" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.482473 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535138-ghxdv" Feb 26 12:18:00 crc kubenswrapper[4699]: I0226 12:18:00.949219 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535138-ghxdv"] Feb 26 12:18:00 crc kubenswrapper[4699]: W0226 12:18:00.953924 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d73a20e_eea0_421b_8efd_6fd86f1e4d98.slice/crio-cc10b7fb9885394124fc9da8f75eb4cdb80b5176fd79644199ce9dd857327c0b WatchSource:0}: Error finding container cc10b7fb9885394124fc9da8f75eb4cdb80b5176fd79644199ce9dd857327c0b: Status 404 returned error can't find the container with id cc10b7fb9885394124fc9da8f75eb4cdb80b5176fd79644199ce9dd857327c0b Feb 26 12:18:01 crc kubenswrapper[4699]: I0226 12:18:01.878925 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535138-ghxdv" event={"ID":"3d73a20e-eea0-421b-8efd-6fd86f1e4d98","Type":"ContainerStarted","Data":"cc10b7fb9885394124fc9da8f75eb4cdb80b5176fd79644199ce9dd857327c0b"} Feb 26 12:18:02 crc kubenswrapper[4699]: I0226 12:18:02.891422 4699 generic.go:334] "Generic (PLEG): container finished" podID="3d73a20e-eea0-421b-8efd-6fd86f1e4d98" containerID="206617da387e97d81b9b831e8d26536a56cede7f0a2daac8fe00d38d64e627ce" exitCode=0 
Feb 26 12:18:02 crc kubenswrapper[4699]: I0226 12:18:02.891531 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535138-ghxdv" event={"ID":"3d73a20e-eea0-421b-8efd-6fd86f1e4d98","Type":"ContainerDied","Data":"206617da387e97d81b9b831e8d26536a56cede7f0a2daac8fe00d38d64e627ce"} Feb 26 12:18:04 crc kubenswrapper[4699]: I0226 12:18:04.319006 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535138-ghxdv" Feb 26 12:18:04 crc kubenswrapper[4699]: I0226 12:18:04.483380 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bd8m\" (UniqueName: \"kubernetes.io/projected/3d73a20e-eea0-421b-8efd-6fd86f1e4d98-kube-api-access-9bd8m\") pod \"3d73a20e-eea0-421b-8efd-6fd86f1e4d98\" (UID: \"3d73a20e-eea0-421b-8efd-6fd86f1e4d98\") " Feb 26 12:18:04 crc kubenswrapper[4699]: I0226 12:18:04.488855 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d73a20e-eea0-421b-8efd-6fd86f1e4d98-kube-api-access-9bd8m" (OuterVolumeSpecName: "kube-api-access-9bd8m") pod "3d73a20e-eea0-421b-8efd-6fd86f1e4d98" (UID: "3d73a20e-eea0-421b-8efd-6fd86f1e4d98"). InnerVolumeSpecName "kube-api-access-9bd8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:18:04 crc kubenswrapper[4699]: I0226 12:18:04.586032 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bd8m\" (UniqueName: \"kubernetes.io/projected/3d73a20e-eea0-421b-8efd-6fd86f1e4d98-kube-api-access-9bd8m\") on node \"crc\" DevicePath \"\"" Feb 26 12:18:04 crc kubenswrapper[4699]: I0226 12:18:04.911596 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535138-ghxdv" event={"ID":"3d73a20e-eea0-421b-8efd-6fd86f1e4d98","Type":"ContainerDied","Data":"cc10b7fb9885394124fc9da8f75eb4cdb80b5176fd79644199ce9dd857327c0b"} Feb 26 12:18:04 crc kubenswrapper[4699]: I0226 12:18:04.912038 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc10b7fb9885394124fc9da8f75eb4cdb80b5176fd79644199ce9dd857327c0b" Feb 26 12:18:04 crc kubenswrapper[4699]: I0226 12:18:04.911859 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535138-ghxdv" Feb 26 12:18:05 crc kubenswrapper[4699]: I0226 12:18:05.380835 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535132-hr4rf"] Feb 26 12:18:05 crc kubenswrapper[4699]: I0226 12:18:05.403643 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535132-hr4rf"] Feb 26 12:18:06 crc kubenswrapper[4699]: I0226 12:18:06.273316 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff3c3e0-4f58-401c-9f5f-b733727f73ff" path="/var/lib/kubelet/pods/5ff3c3e0-4f58-401c-9f5f-b733727f73ff/volumes" Feb 26 12:18:40 crc kubenswrapper[4699]: I0226 12:18:40.364307 4699 scope.go:117] "RemoveContainer" containerID="747ddaa984d13eaf0f8ee9e7ae1b9299bffa91ea051e4eb23c1b1a2ab2aaf402" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.444791 4699 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-l2l5g/must-gather-zwd9v"] Feb 26 12:18:48 crc kubenswrapper[4699]: E0226 12:18:48.445686 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d73a20e-eea0-421b-8efd-6fd86f1e4d98" containerName="oc" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.445701 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d73a20e-eea0-421b-8efd-6fd86f1e4d98" containerName="oc" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.445949 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d73a20e-eea0-421b-8efd-6fd86f1e4d98" containerName="oc" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.447174 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l5g/must-gather-zwd9v" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.453843 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-l2l5g"/"default-dockercfg-hhg4j" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.453939 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l2l5g"/"kube-root-ca.crt" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.454034 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l2l5g"/"openshift-service-ca.crt" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.462359 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l2l5g/must-gather-zwd9v"] Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.609183 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlmbm\" (UniqueName: \"kubernetes.io/projected/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-kube-api-access-zlmbm\") pod \"must-gather-zwd9v\" (UID: \"e1a2e674-d3fd-4fac-b5e0-b201dd644f25\") " pod="openshift-must-gather-l2l5g/must-gather-zwd9v" Feb 26 12:18:48 crc kubenswrapper[4699]: 
I0226 12:18:48.609239 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-must-gather-output\") pod \"must-gather-zwd9v\" (UID: \"e1a2e674-d3fd-4fac-b5e0-b201dd644f25\") " pod="openshift-must-gather-l2l5g/must-gather-zwd9v" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.712546 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlmbm\" (UniqueName: \"kubernetes.io/projected/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-kube-api-access-zlmbm\") pod \"must-gather-zwd9v\" (UID: \"e1a2e674-d3fd-4fac-b5e0-b201dd644f25\") " pod="openshift-must-gather-l2l5g/must-gather-zwd9v" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.712599 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-must-gather-output\") pod \"must-gather-zwd9v\" (UID: \"e1a2e674-d3fd-4fac-b5e0-b201dd644f25\") " pod="openshift-must-gather-l2l5g/must-gather-zwd9v" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.713616 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-must-gather-output\") pod \"must-gather-zwd9v\" (UID: \"e1a2e674-d3fd-4fac-b5e0-b201dd644f25\") " pod="openshift-must-gather-l2l5g/must-gather-zwd9v" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 12:18:48.739099 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlmbm\" (UniqueName: \"kubernetes.io/projected/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-kube-api-access-zlmbm\") pod \"must-gather-zwd9v\" (UID: \"e1a2e674-d3fd-4fac-b5e0-b201dd644f25\") " pod="openshift-must-gather-l2l5g/must-gather-zwd9v" Feb 26 12:18:48 crc kubenswrapper[4699]: I0226 
12:18:48.770014 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l5g/must-gather-zwd9v" Feb 26 12:18:49 crc kubenswrapper[4699]: I0226 12:18:49.266504 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l2l5g/must-gather-zwd9v"] Feb 26 12:18:49 crc kubenswrapper[4699]: W0226 12:18:49.271990 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1a2e674_d3fd_4fac_b5e0_b201dd644f25.slice/crio-d3f9348174a99f466fb6975b66d2932e06844b8873e974def4face740c871121 WatchSource:0}: Error finding container d3f9348174a99f466fb6975b66d2932e06844b8873e974def4face740c871121: Status 404 returned error can't find the container with id d3f9348174a99f466fb6975b66d2932e06844b8873e974def4face740c871121 Feb 26 12:18:49 crc kubenswrapper[4699]: I0226 12:18:49.350399 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/must-gather-zwd9v" event={"ID":"e1a2e674-d3fd-4fac-b5e0-b201dd644f25","Type":"ContainerStarted","Data":"d3f9348174a99f466fb6975b66d2932e06844b8873e974def4face740c871121"} Feb 26 12:18:50 crc kubenswrapper[4699]: I0226 12:18:50.368568 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/must-gather-zwd9v" event={"ID":"e1a2e674-d3fd-4fac-b5e0-b201dd644f25","Type":"ContainerStarted","Data":"69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e"} Feb 26 12:18:50 crc kubenswrapper[4699]: I0226 12:18:50.368625 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/must-gather-zwd9v" event={"ID":"e1a2e674-d3fd-4fac-b5e0-b201dd644f25","Type":"ContainerStarted","Data":"f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0"} Feb 26 12:18:50 crc kubenswrapper[4699]: I0226 12:18:50.400539 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-must-gather-l2l5g/must-gather-zwd9v" podStartSLOduration=2.400512573 podStartE2EDuration="2.400512573s" podCreationTimestamp="2026-02-26 12:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 12:18:50.386597419 +0000 UTC m=+4076.197423863" watchObservedRunningTime="2026-02-26 12:18:50.400512573 +0000 UTC m=+4076.211339017" Feb 26 12:18:51 crc kubenswrapper[4699]: E0226 12:18:51.620157 4699 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.213:38684->38.102.83.213:34509: write tcp 38.102.83.213:38684->38.102.83.213:34509: write: connection reset by peer Feb 26 12:18:53 crc kubenswrapper[4699]: I0226 12:18:53.525444 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l2l5g/crc-debug-j6ff4"] Feb 26 12:18:53 crc kubenswrapper[4699]: I0226 12:18:53.529153 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" Feb 26 12:18:53 crc kubenswrapper[4699]: I0226 12:18:53.607544 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmtmt\" (UniqueName: \"kubernetes.io/projected/01c07c4e-7806-421d-abac-3a7288adae16-kube-api-access-zmtmt\") pod \"crc-debug-j6ff4\" (UID: \"01c07c4e-7806-421d-abac-3a7288adae16\") " pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" Feb 26 12:18:53 crc kubenswrapper[4699]: I0226 12:18:53.607586 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01c07c4e-7806-421d-abac-3a7288adae16-host\") pod \"crc-debug-j6ff4\" (UID: \"01c07c4e-7806-421d-abac-3a7288adae16\") " pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" Feb 26 12:18:53 crc kubenswrapper[4699]: I0226 12:18:53.709072 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-zmtmt\" (UniqueName: \"kubernetes.io/projected/01c07c4e-7806-421d-abac-3a7288adae16-kube-api-access-zmtmt\") pod \"crc-debug-j6ff4\" (UID: \"01c07c4e-7806-421d-abac-3a7288adae16\") " pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" Feb 26 12:18:53 crc kubenswrapper[4699]: I0226 12:18:53.709126 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01c07c4e-7806-421d-abac-3a7288adae16-host\") pod \"crc-debug-j6ff4\" (UID: \"01c07c4e-7806-421d-abac-3a7288adae16\") " pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" Feb 26 12:18:53 crc kubenswrapper[4699]: I0226 12:18:53.709448 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01c07c4e-7806-421d-abac-3a7288adae16-host\") pod \"crc-debug-j6ff4\" (UID: \"01c07c4e-7806-421d-abac-3a7288adae16\") " pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" Feb 26 12:18:53 crc kubenswrapper[4699]: I0226 12:18:53.733043 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmtmt\" (UniqueName: \"kubernetes.io/projected/01c07c4e-7806-421d-abac-3a7288adae16-kube-api-access-zmtmt\") pod \"crc-debug-j6ff4\" (UID: \"01c07c4e-7806-421d-abac-3a7288adae16\") " pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" Feb 26 12:18:53 crc kubenswrapper[4699]: I0226 12:18:53.849557 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" Feb 26 12:18:53 crc kubenswrapper[4699]: W0226 12:18:53.882647 4699 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01c07c4e_7806_421d_abac_3a7288adae16.slice/crio-048b22d49df716dfdf18bf0b5a37794f7b9ab766a5ef5901add33896fa293116 WatchSource:0}: Error finding container 048b22d49df716dfdf18bf0b5a37794f7b9ab766a5ef5901add33896fa293116: Status 404 returned error can't find the container with id 048b22d49df716dfdf18bf0b5a37794f7b9ab766a5ef5901add33896fa293116 Feb 26 12:18:54 crc kubenswrapper[4699]: I0226 12:18:54.408037 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" event={"ID":"01c07c4e-7806-421d-abac-3a7288adae16","Type":"ContainerStarted","Data":"914b1b71a76d7ca6020b99c9b97dbd825c08917a7129daad95e968dab1ca96e3"} Feb 26 12:18:54 crc kubenswrapper[4699]: I0226 12:18:54.408374 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" event={"ID":"01c07c4e-7806-421d-abac-3a7288adae16","Type":"ContainerStarted","Data":"048b22d49df716dfdf18bf0b5a37794f7b9ab766a5ef5901add33896fa293116"} Feb 26 12:18:54 crc kubenswrapper[4699]: I0226 12:18:54.432718 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" podStartSLOduration=1.432694492 podStartE2EDuration="1.432694492s" podCreationTimestamp="2026-02-26 12:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 12:18:54.421710691 +0000 UTC m=+4080.232537135" watchObservedRunningTime="2026-02-26 12:18:54.432694492 +0000 UTC m=+4080.243520946" Feb 26 12:19:25 crc kubenswrapper[4699]: I0226 12:19:25.702930 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wrd9m"] 
Feb 26 12:19:25 crc kubenswrapper[4699]: I0226 12:19:25.705979 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:25 crc kubenswrapper[4699]: I0226 12:19:25.720930 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wrd9m"] Feb 26 12:19:25 crc kubenswrapper[4699]: I0226 12:19:25.885188 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-utilities\") pod \"redhat-operators-wrd9m\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:25 crc kubenswrapper[4699]: I0226 12:19:25.885746 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-catalog-content\") pod \"redhat-operators-wrd9m\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:25 crc kubenswrapper[4699]: I0226 12:19:25.886068 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbd45\" (UniqueName: \"kubernetes.io/projected/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-kube-api-access-cbd45\") pod \"redhat-operators-wrd9m\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:25 crc kubenswrapper[4699]: I0226 12:19:25.987389 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbd45\" (UniqueName: \"kubernetes.io/projected/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-kube-api-access-cbd45\") pod \"redhat-operators-wrd9m\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 
12:19:25 crc kubenswrapper[4699]: I0226 12:19:25.987746 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-utilities\") pod \"redhat-operators-wrd9m\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:25 crc kubenswrapper[4699]: I0226 12:19:25.987873 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-catalog-content\") pod \"redhat-operators-wrd9m\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:25 crc kubenswrapper[4699]: I0226 12:19:25.988324 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-catalog-content\") pod \"redhat-operators-wrd9m\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:25 crc kubenswrapper[4699]: I0226 12:19:25.988402 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-utilities\") pod \"redhat-operators-wrd9m\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:26 crc kubenswrapper[4699]: I0226 12:19:26.020814 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbd45\" (UniqueName: \"kubernetes.io/projected/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-kube-api-access-cbd45\") pod \"redhat-operators-wrd9m\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:26 crc kubenswrapper[4699]: I0226 12:19:26.038254 4699 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:26 crc kubenswrapper[4699]: I0226 12:19:26.530010 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wrd9m"] Feb 26 12:19:26 crc kubenswrapper[4699]: I0226 12:19:26.699492 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrd9m" event={"ID":"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf","Type":"ContainerStarted","Data":"d385ffaad2418df1f042a6db3f0181c3457f9bd79043978aea4ed59564bf6651"} Feb 26 12:19:27 crc kubenswrapper[4699]: I0226 12:19:27.738059 4699 generic.go:334] "Generic (PLEG): container finished" podID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerID="2daebaa5370afaad83a99ae877015f25f2523dc3704a3d0897f2dfb9d894739f" exitCode=0 Feb 26 12:19:27 crc kubenswrapper[4699]: I0226 12:19:27.738130 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrd9m" event={"ID":"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf","Type":"ContainerDied","Data":"2daebaa5370afaad83a99ae877015f25f2523dc3704a3d0897f2dfb9d894739f"} Feb 26 12:19:27 crc kubenswrapper[4699]: I0226 12:19:27.742020 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 12:19:29 crc kubenswrapper[4699]: I0226 12:19:29.759066 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrd9m" event={"ID":"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf","Type":"ContainerStarted","Data":"7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e"} Feb 26 12:19:30 crc kubenswrapper[4699]: I0226 12:19:30.770532 4699 generic.go:334] "Generic (PLEG): container finished" podID="01c07c4e-7806-421d-abac-3a7288adae16" containerID="914b1b71a76d7ca6020b99c9b97dbd825c08917a7129daad95e968dab1ca96e3" exitCode=0 Feb 26 12:19:30 crc kubenswrapper[4699]: I0226 12:19:30.770610 4699 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" event={"ID":"01c07c4e-7806-421d-abac-3a7288adae16","Type":"ContainerDied","Data":"914b1b71a76d7ca6020b99c9b97dbd825c08917a7129daad95e968dab1ca96e3"} Feb 26 12:19:31 crc kubenswrapper[4699]: I0226 12:19:31.976335 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" Feb 26 12:19:32 crc kubenswrapper[4699]: I0226 12:19:32.013957 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l2l5g/crc-debug-j6ff4"] Feb 26 12:19:32 crc kubenswrapper[4699]: I0226 12:19:32.021754 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l2l5g/crc-debug-j6ff4"] Feb 26 12:19:32 crc kubenswrapper[4699]: I0226 12:19:32.064682 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmtmt\" (UniqueName: \"kubernetes.io/projected/01c07c4e-7806-421d-abac-3a7288adae16-kube-api-access-zmtmt\") pod \"01c07c4e-7806-421d-abac-3a7288adae16\" (UID: \"01c07c4e-7806-421d-abac-3a7288adae16\") " Feb 26 12:19:32 crc kubenswrapper[4699]: I0226 12:19:32.064852 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01c07c4e-7806-421d-abac-3a7288adae16-host\") pod \"01c07c4e-7806-421d-abac-3a7288adae16\" (UID: \"01c07c4e-7806-421d-abac-3a7288adae16\") " Feb 26 12:19:32 crc kubenswrapper[4699]: I0226 12:19:32.065048 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01c07c4e-7806-421d-abac-3a7288adae16-host" (OuterVolumeSpecName: "host") pod "01c07c4e-7806-421d-abac-3a7288adae16" (UID: "01c07c4e-7806-421d-abac-3a7288adae16"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 12:19:32 crc kubenswrapper[4699]: I0226 12:19:32.065514 4699 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01c07c4e-7806-421d-abac-3a7288adae16-host\") on node \"crc\" DevicePath \"\"" Feb 26 12:19:32 crc kubenswrapper[4699]: I0226 12:19:32.073076 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c07c4e-7806-421d-abac-3a7288adae16-kube-api-access-zmtmt" (OuterVolumeSpecName: "kube-api-access-zmtmt") pod "01c07c4e-7806-421d-abac-3a7288adae16" (UID: "01c07c4e-7806-421d-abac-3a7288adae16"). InnerVolumeSpecName "kube-api-access-zmtmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:19:32 crc kubenswrapper[4699]: I0226 12:19:32.167481 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmtmt\" (UniqueName: \"kubernetes.io/projected/01c07c4e-7806-421d-abac-3a7288adae16-kube-api-access-zmtmt\") on node \"crc\" DevicePath \"\"" Feb 26 12:19:32 crc kubenswrapper[4699]: I0226 12:19:32.272721 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c07c4e-7806-421d-abac-3a7288adae16" path="/var/lib/kubelet/pods/01c07c4e-7806-421d-abac-3a7288adae16/volumes" Feb 26 12:19:32 crc kubenswrapper[4699]: I0226 12:19:32.881571 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-j6ff4" Feb 26 12:19:32 crc kubenswrapper[4699]: I0226 12:19:32.881622 4699 scope.go:117] "RemoveContainer" containerID="914b1b71a76d7ca6020b99c9b97dbd825c08917a7129daad95e968dab1ca96e3" Feb 26 12:19:33 crc kubenswrapper[4699]: I0226 12:19:33.207909 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l2l5g/crc-debug-dmh22"] Feb 26 12:19:33 crc kubenswrapper[4699]: E0226 12:19:33.208414 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c07c4e-7806-421d-abac-3a7288adae16" containerName="container-00" Feb 26 12:19:33 crc kubenswrapper[4699]: I0226 12:19:33.208432 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c07c4e-7806-421d-abac-3a7288adae16" containerName="container-00" Feb 26 12:19:33 crc kubenswrapper[4699]: I0226 12:19:33.208743 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c07c4e-7806-421d-abac-3a7288adae16" containerName="container-00" Feb 26 12:19:33 crc kubenswrapper[4699]: I0226 12:19:33.209544 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-dmh22" Feb 26 12:19:33 crc kubenswrapper[4699]: I0226 12:19:33.400621 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a70563cf-4017-4654-a730-3bd13e1b3b3a-host\") pod \"crc-debug-dmh22\" (UID: \"a70563cf-4017-4654-a730-3bd13e1b3b3a\") " pod="openshift-must-gather-l2l5g/crc-debug-dmh22" Feb 26 12:19:33 crc kubenswrapper[4699]: I0226 12:19:33.400949 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6k2d\" (UniqueName: \"kubernetes.io/projected/a70563cf-4017-4654-a730-3bd13e1b3b3a-kube-api-access-p6k2d\") pod \"crc-debug-dmh22\" (UID: \"a70563cf-4017-4654-a730-3bd13e1b3b3a\") " pod="openshift-must-gather-l2l5g/crc-debug-dmh22" Feb 26 12:19:33 crc kubenswrapper[4699]: I0226 12:19:33.504098 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6k2d\" (UniqueName: \"kubernetes.io/projected/a70563cf-4017-4654-a730-3bd13e1b3b3a-kube-api-access-p6k2d\") pod \"crc-debug-dmh22\" (UID: \"a70563cf-4017-4654-a730-3bd13e1b3b3a\") " pod="openshift-must-gather-l2l5g/crc-debug-dmh22" Feb 26 12:19:33 crc kubenswrapper[4699]: I0226 12:19:33.504199 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a70563cf-4017-4654-a730-3bd13e1b3b3a-host\") pod \"crc-debug-dmh22\" (UID: \"a70563cf-4017-4654-a730-3bd13e1b3b3a\") " pod="openshift-must-gather-l2l5g/crc-debug-dmh22" Feb 26 12:19:33 crc kubenswrapper[4699]: I0226 12:19:33.504346 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a70563cf-4017-4654-a730-3bd13e1b3b3a-host\") pod \"crc-debug-dmh22\" (UID: \"a70563cf-4017-4654-a730-3bd13e1b3b3a\") " pod="openshift-must-gather-l2l5g/crc-debug-dmh22" Feb 26 12:19:33 crc 
kubenswrapper[4699]: I0226 12:19:33.526542 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6k2d\" (UniqueName: \"kubernetes.io/projected/a70563cf-4017-4654-a730-3bd13e1b3b3a-kube-api-access-p6k2d\") pod \"crc-debug-dmh22\" (UID: \"a70563cf-4017-4654-a730-3bd13e1b3b3a\") " pod="openshift-must-gather-l2l5g/crc-debug-dmh22" Feb 26 12:19:33 crc kubenswrapper[4699]: I0226 12:19:33.529605 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-dmh22" Feb 26 12:19:33 crc kubenswrapper[4699]: I0226 12:19:33.892910 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/crc-debug-dmh22" event={"ID":"a70563cf-4017-4654-a730-3bd13e1b3b3a","Type":"ContainerStarted","Data":"cf08047c111eeed20efe0e8113b2e168d2dda4bf486d223eef2adf35e628e7e3"} Feb 26 12:19:34 crc kubenswrapper[4699]: I0226 12:19:34.950189 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/crc-debug-dmh22" event={"ID":"a70563cf-4017-4654-a730-3bd13e1b3b3a","Type":"ContainerStarted","Data":"3661f7766c195df1890f39782c6ee0afb458e5fb745113c3cb232308b3d30727"} Feb 26 12:19:34 crc kubenswrapper[4699]: I0226 12:19:34.971711 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l2l5g/crc-debug-dmh22" podStartSLOduration=1.971692563 podStartE2EDuration="1.971692563s" podCreationTimestamp="2026-02-26 12:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 12:19:34.965428836 +0000 UTC m=+4120.776255270" watchObservedRunningTime="2026-02-26 12:19:34.971692563 +0000 UTC m=+4120.782518987" Feb 26 12:19:38 crc kubenswrapper[4699]: I0226 12:19:38.141280 4699 generic.go:334] "Generic (PLEG): container finished" podID="a70563cf-4017-4654-a730-3bd13e1b3b3a" 
containerID="3661f7766c195df1890f39782c6ee0afb458e5fb745113c3cb232308b3d30727" exitCode=0 Feb 26 12:19:38 crc kubenswrapper[4699]: I0226 12:19:38.141404 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/crc-debug-dmh22" event={"ID":"a70563cf-4017-4654-a730-3bd13e1b3b3a","Type":"ContainerDied","Data":"3661f7766c195df1890f39782c6ee0afb458e5fb745113c3cb232308b3d30727"} Feb 26 12:19:39 crc kubenswrapper[4699]: I0226 12:19:39.261699 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-dmh22" Feb 26 12:19:39 crc kubenswrapper[4699]: I0226 12:19:39.290958 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l2l5g/crc-debug-dmh22"] Feb 26 12:19:39 crc kubenswrapper[4699]: I0226 12:19:39.298920 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l2l5g/crc-debug-dmh22"] Feb 26 12:19:39 crc kubenswrapper[4699]: I0226 12:19:39.446268 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6k2d\" (UniqueName: \"kubernetes.io/projected/a70563cf-4017-4654-a730-3bd13e1b3b3a-kube-api-access-p6k2d\") pod \"a70563cf-4017-4654-a730-3bd13e1b3b3a\" (UID: \"a70563cf-4017-4654-a730-3bd13e1b3b3a\") " Feb 26 12:19:39 crc kubenswrapper[4699]: I0226 12:19:39.446470 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a70563cf-4017-4654-a730-3bd13e1b3b3a-host\") pod \"a70563cf-4017-4654-a730-3bd13e1b3b3a\" (UID: \"a70563cf-4017-4654-a730-3bd13e1b3b3a\") " Feb 26 12:19:39 crc kubenswrapper[4699]: I0226 12:19:39.446752 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70563cf-4017-4654-a730-3bd13e1b3b3a-host" (OuterVolumeSpecName: "host") pod "a70563cf-4017-4654-a730-3bd13e1b3b3a" (UID: "a70563cf-4017-4654-a730-3bd13e1b3b3a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 12:19:39 crc kubenswrapper[4699]: I0226 12:19:39.447133 4699 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a70563cf-4017-4654-a730-3bd13e1b3b3a-host\") on node \"crc\" DevicePath \"\"" Feb 26 12:19:39 crc kubenswrapper[4699]: I0226 12:19:39.452165 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a70563cf-4017-4654-a730-3bd13e1b3b3a-kube-api-access-p6k2d" (OuterVolumeSpecName: "kube-api-access-p6k2d") pod "a70563cf-4017-4654-a730-3bd13e1b3b3a" (UID: "a70563cf-4017-4654-a730-3bd13e1b3b3a"). InnerVolumeSpecName "kube-api-access-p6k2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:19:39 crc kubenswrapper[4699]: I0226 12:19:39.550155 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6k2d\" (UniqueName: \"kubernetes.io/projected/a70563cf-4017-4654-a730-3bd13e1b3b3a-kube-api-access-p6k2d\") on node \"crc\" DevicePath \"\"" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.373037 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-dmh22" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.381538 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a70563cf-4017-4654-a730-3bd13e1b3b3a" path="/var/lib/kubelet/pods/a70563cf-4017-4654-a730-3bd13e1b3b3a/volumes" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.382261 4699 scope.go:117] "RemoveContainer" containerID="3661f7766c195df1890f39782c6ee0afb458e5fb745113c3cb232308b3d30727" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.551587 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l2l5g/crc-debug-ft29p"] Feb 26 12:19:40 crc kubenswrapper[4699]: E0226 12:19:40.552229 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70563cf-4017-4654-a730-3bd13e1b3b3a" containerName="container-00" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.552250 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70563cf-4017-4654-a730-3bd13e1b3b3a" containerName="container-00" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.552493 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70563cf-4017-4654-a730-3bd13e1b3b3a" containerName="container-00" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.553379 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-ft29p" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.565749 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/858ae445-a203-46c0-b9f1-4dcf82a7b902-host\") pod \"crc-debug-ft29p\" (UID: \"858ae445-a203-46c0-b9f1-4dcf82a7b902\") " pod="openshift-must-gather-l2l5g/crc-debug-ft29p" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.565790 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvns4\" (UniqueName: \"kubernetes.io/projected/858ae445-a203-46c0-b9f1-4dcf82a7b902-kube-api-access-cvns4\") pod \"crc-debug-ft29p\" (UID: \"858ae445-a203-46c0-b9f1-4dcf82a7b902\") " pod="openshift-must-gather-l2l5g/crc-debug-ft29p" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.667467 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/858ae445-a203-46c0-b9f1-4dcf82a7b902-host\") pod \"crc-debug-ft29p\" (UID: \"858ae445-a203-46c0-b9f1-4dcf82a7b902\") " pod="openshift-must-gather-l2l5g/crc-debug-ft29p" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.667512 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvns4\" (UniqueName: \"kubernetes.io/projected/858ae445-a203-46c0-b9f1-4dcf82a7b902-kube-api-access-cvns4\") pod \"crc-debug-ft29p\" (UID: \"858ae445-a203-46c0-b9f1-4dcf82a7b902\") " pod="openshift-must-gather-l2l5g/crc-debug-ft29p" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.667565 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/858ae445-a203-46c0-b9f1-4dcf82a7b902-host\") pod \"crc-debug-ft29p\" (UID: \"858ae445-a203-46c0-b9f1-4dcf82a7b902\") " pod="openshift-must-gather-l2l5g/crc-debug-ft29p" Feb 26 12:19:40 crc 
kubenswrapper[4699]: I0226 12:19:40.685710 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvns4\" (UniqueName: \"kubernetes.io/projected/858ae445-a203-46c0-b9f1-4dcf82a7b902-kube-api-access-cvns4\") pod \"crc-debug-ft29p\" (UID: \"858ae445-a203-46c0-b9f1-4dcf82a7b902\") " pod="openshift-must-gather-l2l5g/crc-debug-ft29p" Feb 26 12:19:40 crc kubenswrapper[4699]: I0226 12:19:40.874928 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-ft29p" Feb 26 12:19:41 crc kubenswrapper[4699]: I0226 12:19:41.397197 4699 generic.go:334] "Generic (PLEG): container finished" podID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerID="7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e" exitCode=0 Feb 26 12:19:41 crc kubenswrapper[4699]: I0226 12:19:41.397274 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrd9m" event={"ID":"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf","Type":"ContainerDied","Data":"7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e"} Feb 26 12:19:41 crc kubenswrapper[4699]: I0226 12:19:41.399480 4699 generic.go:334] "Generic (PLEG): container finished" podID="858ae445-a203-46c0-b9f1-4dcf82a7b902" containerID="cbef9b8d06df85870411c16f203fb0797263277126ea2f97232cdb89a5553998" exitCode=0 Feb 26 12:19:41 crc kubenswrapper[4699]: I0226 12:19:41.399528 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/crc-debug-ft29p" event={"ID":"858ae445-a203-46c0-b9f1-4dcf82a7b902","Type":"ContainerDied","Data":"cbef9b8d06df85870411c16f203fb0797263277126ea2f97232cdb89a5553998"} Feb 26 12:19:41 crc kubenswrapper[4699]: I0226 12:19:41.399557 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/crc-debug-ft29p" 
event={"ID":"858ae445-a203-46c0-b9f1-4dcf82a7b902","Type":"ContainerStarted","Data":"fe8482311d347e90bb4cda3d78e7fb0585efc4b96b416a5a37d37b43b3af663f"} Feb 26 12:19:41 crc kubenswrapper[4699]: I0226 12:19:41.472905 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l2l5g/crc-debug-ft29p"] Feb 26 12:19:41 crc kubenswrapper[4699]: I0226 12:19:41.483040 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l2l5g/crc-debug-ft29p"] Feb 26 12:19:41 crc kubenswrapper[4699]: I0226 12:19:41.584665 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:19:41 crc kubenswrapper[4699]: I0226 12:19:41.584729 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:19:42 crc kubenswrapper[4699]: I0226 12:19:42.409727 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrd9m" event={"ID":"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf","Type":"ContainerStarted","Data":"ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7"} Feb 26 12:19:42 crc kubenswrapper[4699]: I0226 12:19:42.442480 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wrd9m" podStartSLOduration=3.360578438 podStartE2EDuration="17.442454066s" podCreationTimestamp="2026-02-26 12:19:25 +0000 UTC" firstStartedPulling="2026-02-26 12:19:27.74153858 +0000 UTC m=+4113.552365014" lastFinishedPulling="2026-02-26 12:19:41.823414208 
+0000 UTC m=+4127.634240642" observedRunningTime="2026-02-26 12:19:42.430671812 +0000 UTC m=+4128.241498266" watchObservedRunningTime="2026-02-26 12:19:42.442454066 +0000 UTC m=+4128.253280510" Feb 26 12:19:42 crc kubenswrapper[4699]: I0226 12:19:42.662157 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-ft29p" Feb 26 12:19:42 crc kubenswrapper[4699]: I0226 12:19:42.740773 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvns4\" (UniqueName: \"kubernetes.io/projected/858ae445-a203-46c0-b9f1-4dcf82a7b902-kube-api-access-cvns4\") pod \"858ae445-a203-46c0-b9f1-4dcf82a7b902\" (UID: \"858ae445-a203-46c0-b9f1-4dcf82a7b902\") " Feb 26 12:19:42 crc kubenswrapper[4699]: I0226 12:19:42.740905 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/858ae445-a203-46c0-b9f1-4dcf82a7b902-host\") pod \"858ae445-a203-46c0-b9f1-4dcf82a7b902\" (UID: \"858ae445-a203-46c0-b9f1-4dcf82a7b902\") " Feb 26 12:19:42 crc kubenswrapper[4699]: I0226 12:19:42.741139 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/858ae445-a203-46c0-b9f1-4dcf82a7b902-host" (OuterVolumeSpecName: "host") pod "858ae445-a203-46c0-b9f1-4dcf82a7b902" (UID: "858ae445-a203-46c0-b9f1-4dcf82a7b902"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 12:19:42 crc kubenswrapper[4699]: I0226 12:19:42.741782 4699 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/858ae445-a203-46c0-b9f1-4dcf82a7b902-host\") on node \"crc\" DevicePath \"\"" Feb 26 12:19:42 crc kubenswrapper[4699]: I0226 12:19:42.750582 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/858ae445-a203-46c0-b9f1-4dcf82a7b902-kube-api-access-cvns4" (OuterVolumeSpecName: "kube-api-access-cvns4") pod "858ae445-a203-46c0-b9f1-4dcf82a7b902" (UID: "858ae445-a203-46c0-b9f1-4dcf82a7b902"). InnerVolumeSpecName "kube-api-access-cvns4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:19:42 crc kubenswrapper[4699]: I0226 12:19:42.843488 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvns4\" (UniqueName: \"kubernetes.io/projected/858ae445-a203-46c0-b9f1-4dcf82a7b902-kube-api-access-cvns4\") on node \"crc\" DevicePath \"\"" Feb 26 12:19:43 crc kubenswrapper[4699]: I0226 12:19:43.429094 4699 scope.go:117] "RemoveContainer" containerID="cbef9b8d06df85870411c16f203fb0797263277126ea2f97232cdb89a5553998" Feb 26 12:19:43 crc kubenswrapper[4699]: I0226 12:19:43.429305 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l2l5g/crc-debug-ft29p" Feb 26 12:19:44 crc kubenswrapper[4699]: I0226 12:19:44.277823 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="858ae445-a203-46c0-b9f1-4dcf82a7b902" path="/var/lib/kubelet/pods/858ae445-a203-46c0-b9f1-4dcf82a7b902/volumes" Feb 26 12:19:46 crc kubenswrapper[4699]: I0226 12:19:46.039251 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:46 crc kubenswrapper[4699]: I0226 12:19:46.039598 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:47 crc kubenswrapper[4699]: I0226 12:19:47.271959 4699 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wrd9m" podUID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerName="registry-server" probeResult="failure" output=< Feb 26 12:19:47 crc kubenswrapper[4699]: timeout: failed to connect service ":50051" within 1s Feb 26 12:19:47 crc kubenswrapper[4699]: > Feb 26 12:19:56 crc kubenswrapper[4699]: I0226 12:19:56.104184 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:56 crc kubenswrapper[4699]: I0226 12:19:56.239706 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:56 crc kubenswrapper[4699]: I0226 12:19:56.903815 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wrd9m"] Feb 26 12:19:57 crc kubenswrapper[4699]: I0226 12:19:57.930931 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wrd9m" podUID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerName="registry-server" 
containerID="cri-o://ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7" gracePeriod=2 Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.417657 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.619680 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbd45\" (UniqueName: \"kubernetes.io/projected/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-kube-api-access-cbd45\") pod \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.620044 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-utilities\") pod \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.620243 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-catalog-content\") pod \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\" (UID: \"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf\") " Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.620840 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-utilities" (OuterVolumeSpecName: "utilities") pod "b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" (UID: "b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.621095 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.628313 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-kube-api-access-cbd45" (OuterVolumeSpecName: "kube-api-access-cbd45") pod "b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" (UID: "b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf"). InnerVolumeSpecName "kube-api-access-cbd45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.722945 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbd45\" (UniqueName: \"kubernetes.io/projected/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-kube-api-access-cbd45\") on node \"crc\" DevicePath \"\"" Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.745723 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" (UID: "b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.823891 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.943234 4699 generic.go:334] "Generic (PLEG): container finished" podID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerID="ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7" exitCode=0 Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.943280 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrd9m" event={"ID":"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf","Type":"ContainerDied","Data":"ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7"} Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.943315 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrd9m" event={"ID":"b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf","Type":"ContainerDied","Data":"d385ffaad2418df1f042a6db3f0181c3457f9bd79043978aea4ed59564bf6651"} Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.943375 4699 scope.go:117] "RemoveContainer" containerID="ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7" Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.944388 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wrd9m" Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.975360 4699 scope.go:117] "RemoveContainer" containerID="7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e" Feb 26 12:19:58 crc kubenswrapper[4699]: I0226 12:19:58.994717 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wrd9m"] Feb 26 12:19:59 crc kubenswrapper[4699]: I0226 12:19:59.003071 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wrd9m"] Feb 26 12:19:59 crc kubenswrapper[4699]: I0226 12:19:59.006230 4699 scope.go:117] "RemoveContainer" containerID="2daebaa5370afaad83a99ae877015f25f2523dc3704a3d0897f2dfb9d894739f" Feb 26 12:19:59 crc kubenswrapper[4699]: I0226 12:19:59.258350 4699 scope.go:117] "RemoveContainer" containerID="ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7" Feb 26 12:19:59 crc kubenswrapper[4699]: E0226 12:19:59.258786 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7\": container with ID starting with ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7 not found: ID does not exist" containerID="ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7" Feb 26 12:19:59 crc kubenswrapper[4699]: I0226 12:19:59.258824 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7"} err="failed to get container status \"ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7\": rpc error: code = NotFound desc = could not find container \"ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7\": container with ID starting with ed1cca4cfd46370f9a4e1dd43a9fe5c873f3c94e168cab7d19373995433384a7 not found: ID does 
not exist" Feb 26 12:19:59 crc kubenswrapper[4699]: I0226 12:19:59.258845 4699 scope.go:117] "RemoveContainer" containerID="7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e" Feb 26 12:19:59 crc kubenswrapper[4699]: E0226 12:19:59.259080 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e\": container with ID starting with 7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e not found: ID does not exist" containerID="7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e" Feb 26 12:19:59 crc kubenswrapper[4699]: I0226 12:19:59.259161 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e"} err="failed to get container status \"7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e\": rpc error: code = NotFound desc = could not find container \"7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e\": container with ID starting with 7de7cd2371d3abbfbf7615ebcd3acb90e41a2f685f589eb3f400c395ba8b7d8e not found: ID does not exist" Feb 26 12:19:59 crc kubenswrapper[4699]: I0226 12:19:59.259174 4699 scope.go:117] "RemoveContainer" containerID="2daebaa5370afaad83a99ae877015f25f2523dc3704a3d0897f2dfb9d894739f" Feb 26 12:19:59 crc kubenswrapper[4699]: E0226 12:19:59.260266 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2daebaa5370afaad83a99ae877015f25f2523dc3704a3d0897f2dfb9d894739f\": container with ID starting with 2daebaa5370afaad83a99ae877015f25f2523dc3704a3d0897f2dfb9d894739f not found: ID does not exist" containerID="2daebaa5370afaad83a99ae877015f25f2523dc3704a3d0897f2dfb9d894739f" Feb 26 12:19:59 crc kubenswrapper[4699]: I0226 12:19:59.260306 4699 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2daebaa5370afaad83a99ae877015f25f2523dc3704a3d0897f2dfb9d894739f"} err="failed to get container status \"2daebaa5370afaad83a99ae877015f25f2523dc3704a3d0897f2dfb9d894739f\": rpc error: code = NotFound desc = could not find container \"2daebaa5370afaad83a99ae877015f25f2523dc3704a3d0897f2dfb9d894739f\": container with ID starting with 2daebaa5370afaad83a99ae877015f25f2523dc3704a3d0897f2dfb9d894739f not found: ID does not exist" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.154455 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535140-wg97p"] Feb 26 12:20:00 crc kubenswrapper[4699]: E0226 12:20:00.155000 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerName="extract-utilities" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.155028 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerName="extract-utilities" Feb 26 12:20:00 crc kubenswrapper[4699]: E0226 12:20:00.155059 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="858ae445-a203-46c0-b9f1-4dcf82a7b902" containerName="container-00" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.155068 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="858ae445-a203-46c0-b9f1-4dcf82a7b902" containerName="container-00" Feb 26 12:20:00 crc kubenswrapper[4699]: E0226 12:20:00.155083 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerName="registry-server" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.155090 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerName="registry-server" Feb 26 12:20:00 crc kubenswrapper[4699]: E0226 12:20:00.155132 4699 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerName="extract-content" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.155139 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerName="extract-content" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.155450 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" containerName="registry-server" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.155473 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="858ae445-a203-46c0-b9f1-4dcf82a7b902" containerName="container-00" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.156404 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535140-wg97p" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.161323 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.161617 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.161783 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.175763 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535140-wg97p"] Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.271682 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf" path="/var/lib/kubelet/pods/b51326a2-4ccd-48ce-8c3b-e6fb9ffc56bf/volumes" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.339174 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7hrsf\" (UniqueName: \"kubernetes.io/projected/924cba42-fd14-4d50-815d-0d8fa83c6b06-kube-api-access-7hrsf\") pod \"auto-csr-approver-29535140-wg97p\" (UID: \"924cba42-fd14-4d50-815d-0d8fa83c6b06\") " pod="openshift-infra/auto-csr-approver-29535140-wg97p" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.441444 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hrsf\" (UniqueName: \"kubernetes.io/projected/924cba42-fd14-4d50-815d-0d8fa83c6b06-kube-api-access-7hrsf\") pod \"auto-csr-approver-29535140-wg97p\" (UID: \"924cba42-fd14-4d50-815d-0d8fa83c6b06\") " pod="openshift-infra/auto-csr-approver-29535140-wg97p" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.479699 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hrsf\" (UniqueName: \"kubernetes.io/projected/924cba42-fd14-4d50-815d-0d8fa83c6b06-kube-api-access-7hrsf\") pod \"auto-csr-approver-29535140-wg97p\" (UID: \"924cba42-fd14-4d50-815d-0d8fa83c6b06\") " pod="openshift-infra/auto-csr-approver-29535140-wg97p" Feb 26 12:20:00 crc kubenswrapper[4699]: I0226 12:20:00.776379 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535140-wg97p" Feb 26 12:20:01 crc kubenswrapper[4699]: I0226 12:20:01.608572 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535140-wg97p"] Feb 26 12:20:02 crc kubenswrapper[4699]: I0226 12:20:02.143752 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535140-wg97p" event={"ID":"924cba42-fd14-4d50-815d-0d8fa83c6b06","Type":"ContainerStarted","Data":"86525d46e644f97de7bd2d890833add4ab8654ccebe3f730334374a10853b020"} Feb 26 12:20:06 crc kubenswrapper[4699]: I0226 12:20:06.179874 4699 generic.go:334] "Generic (PLEG): container finished" podID="924cba42-fd14-4d50-815d-0d8fa83c6b06" containerID="74ea3c51dc439314ff3bb87ede5fd5f905e28e2682d357fd7d7822dde4facddf" exitCode=0 Feb 26 12:20:06 crc kubenswrapper[4699]: I0226 12:20:06.180082 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535140-wg97p" event={"ID":"924cba42-fd14-4d50-815d-0d8fa83c6b06","Type":"ContainerDied","Data":"74ea3c51dc439314ff3bb87ede5fd5f905e28e2682d357fd7d7822dde4facddf"} Feb 26 12:20:07 crc kubenswrapper[4699]: I0226 12:20:07.864588 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535140-wg97p" Feb 26 12:20:08 crc kubenswrapper[4699]: I0226 12:20:08.007405 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hrsf\" (UniqueName: \"kubernetes.io/projected/924cba42-fd14-4d50-815d-0d8fa83c6b06-kube-api-access-7hrsf\") pod \"924cba42-fd14-4d50-815d-0d8fa83c6b06\" (UID: \"924cba42-fd14-4d50-815d-0d8fa83c6b06\") " Feb 26 12:20:08 crc kubenswrapper[4699]: I0226 12:20:08.013771 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924cba42-fd14-4d50-815d-0d8fa83c6b06-kube-api-access-7hrsf" (OuterVolumeSpecName: "kube-api-access-7hrsf") pod "924cba42-fd14-4d50-815d-0d8fa83c6b06" (UID: "924cba42-fd14-4d50-815d-0d8fa83c6b06"). InnerVolumeSpecName "kube-api-access-7hrsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:20:08 crc kubenswrapper[4699]: I0226 12:20:08.110010 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hrsf\" (UniqueName: \"kubernetes.io/projected/924cba42-fd14-4d50-815d-0d8fa83c6b06-kube-api-access-7hrsf\") on node \"crc\" DevicePath \"\"" Feb 26 12:20:08 crc kubenswrapper[4699]: I0226 12:20:08.201054 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535140-wg97p" event={"ID":"924cba42-fd14-4d50-815d-0d8fa83c6b06","Type":"ContainerDied","Data":"86525d46e644f97de7bd2d890833add4ab8654ccebe3f730334374a10853b020"} Feb 26 12:20:08 crc kubenswrapper[4699]: I0226 12:20:08.201106 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86525d46e644f97de7bd2d890833add4ab8654ccebe3f730334374a10853b020" Feb 26 12:20:08 crc kubenswrapper[4699]: I0226 12:20:08.201125 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535140-wg97p" Feb 26 12:20:09 crc kubenswrapper[4699]: I0226 12:20:09.259469 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535134-bh8fn"] Feb 26 12:20:09 crc kubenswrapper[4699]: I0226 12:20:09.268722 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535134-bh8fn"] Feb 26 12:20:10 crc kubenswrapper[4699]: I0226 12:20:10.271986 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916cd984-33ed-4299-ade5-5064478d656f" path="/var/lib/kubelet/pods/916cd984-33ed-4299-ade5-5064478d656f/volumes" Feb 26 12:20:11 crc kubenswrapper[4699]: I0226 12:20:11.585376 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:20:11 crc kubenswrapper[4699]: I0226 12:20:11.585479 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:20:11 crc kubenswrapper[4699]: I0226 12:20:11.844609 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-977f89944-b96zk_dd004e01-9dac-4316-b6ee-05c1a0f20713/barbican-api/0.log" Feb 26 12:20:12 crc kubenswrapper[4699]: I0226 12:20:12.223986 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-977f89944-b96zk_dd004e01-9dac-4316-b6ee-05c1a0f20713/barbican-api-log/0.log" Feb 26 12:20:12 crc kubenswrapper[4699]: I0226 12:20:12.241099 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-5bb8c656f4-cl8tt_770f4ffe-352c-416b-8f67-a894c4107003/barbican-keystone-listener/0.log" Feb 26 12:20:12 crc kubenswrapper[4699]: I0226 12:20:12.340594 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bb8c656f4-cl8tt_770f4ffe-352c-416b-8f67-a894c4107003/barbican-keystone-listener-log/0.log" Feb 26 12:20:12 crc kubenswrapper[4699]: I0226 12:20:12.492397 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6596b66679-qmv4f_edb59470-4038-48c2-a3ec-f3046406a971/barbican-worker-log/0.log" Feb 26 12:20:12 crc kubenswrapper[4699]: I0226 12:20:12.517539 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6596b66679-qmv4f_edb59470-4038-48c2-a3ec-f3046406a971/barbican-worker/0.log" Feb 26 12:20:12 crc kubenswrapper[4699]: I0226 12:20:12.728726 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pl6wj_fee4a36b-0896-43c1-9b23-3da3ae870cbe/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:12 crc kubenswrapper[4699]: I0226 12:20:12.789764 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_09a6eb79-27c3-465b-adae-b32d96c56b65/ceilometer-central-agent/0.log" Feb 26 12:20:13 crc kubenswrapper[4699]: I0226 12:20:13.348088 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_09a6eb79-27c3-465b-adae-b32d96c56b65/ceilometer-notification-agent/0.log" Feb 26 12:20:13 crc kubenswrapper[4699]: I0226 12:20:13.442464 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_09a6eb79-27c3-465b-adae-b32d96c56b65/proxy-httpd/0.log" Feb 26 12:20:13 crc kubenswrapper[4699]: I0226 12:20:13.554871 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_09a6eb79-27c3-465b-adae-b32d96c56b65/sg-core/0.log" Feb 
26 12:20:13 crc kubenswrapper[4699]: I0226 12:20:13.603534 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c2c2d2c1-e68e-4b14-a732-3b42a6132503/cinder-api/0.log" Feb 26 12:20:13 crc kubenswrapper[4699]: I0226 12:20:13.739097 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c2c2d2c1-e68e-4b14-a732-3b42a6132503/cinder-api-log/0.log" Feb 26 12:20:13 crc kubenswrapper[4699]: I0226 12:20:13.819994 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fbf1f488-444f-45d3-b5e6-44506bf45f8e/cinder-scheduler/0.log" Feb 26 12:20:13 crc kubenswrapper[4699]: I0226 12:20:13.862639 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fbf1f488-444f-45d3-b5e6-44506bf45f8e/probe/0.log" Feb 26 12:20:14 crc kubenswrapper[4699]: I0226 12:20:14.643393 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-86gl7_b1a06be0-15ce-4abd-b9e7-7e11e789bd64/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:14 crc kubenswrapper[4699]: I0226 12:20:14.659759 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-h9q25_85e0d37e-fb25-4bbc-afe5-7e6ab304390c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:14 crc kubenswrapper[4699]: I0226 12:20:14.849667 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-hddfn_24dd88a8-4737-4ebc-8925-b2bcedb760c2/init/0.log" Feb 26 12:20:15 crc kubenswrapper[4699]: I0226 12:20:15.109427 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-hddfn_24dd88a8-4737-4ebc-8925-b2bcedb760c2/dnsmasq-dns/0.log" Feb 26 12:20:15 crc kubenswrapper[4699]: I0226 12:20:15.298061 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-hddfn_24dd88a8-4737-4ebc-8925-b2bcedb760c2/init/0.log" Feb 26 12:20:15 crc kubenswrapper[4699]: I0226 12:20:15.312364 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-f97wz_8d139dcb-cb7b-4711-a8a6-62e27c3cd7e2/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:15 crc kubenswrapper[4699]: I0226 12:20:15.645162 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9c58ea0a-4ad4-47cf-8976-a004ef7e56da/glance-httpd/0.log" Feb 26 12:20:15 crc kubenswrapper[4699]: I0226 12:20:15.715130 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9c58ea0a-4ad4-47cf-8976-a004ef7e56da/glance-log/0.log" Feb 26 12:20:15 crc kubenswrapper[4699]: I0226 12:20:15.844941 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_796738f1-8a6c-4e91-bdfe-bee2f252b3fc/glance-log/0.log" Feb 26 12:20:15 crc kubenswrapper[4699]: I0226 12:20:15.892575 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_796738f1-8a6c-4e91-bdfe-bee2f252b3fc/glance-httpd/0.log" Feb 26 12:20:16 crc kubenswrapper[4699]: I0226 12:20:16.486504 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5795557cd8-dvzqq_15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0/horizon/0.log" Feb 26 12:20:16 crc kubenswrapper[4699]: I0226 12:20:16.649143 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-hp4pv_e537c30c-dc6b-406f-bb86-5540ebd8a36d/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:16 crc kubenswrapper[4699]: I0226 12:20:16.679932 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-5795557cd8-dvzqq_15a1a0c7-f6c0-46a5-86f9-a0dec5a257e0/horizon-log/0.log" Feb 26 12:20:16 crc kubenswrapper[4699]: I0226 12:20:16.745399 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-mlb2f_ac66647f-74c0-4a4e-9925-e47cd90568a1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:16 crc kubenswrapper[4699]: I0226 12:20:16.955389 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29535121-plvtd_ef1e8bd7-66e8-4eef-979e-8bf3e57b2a68/keystone-cron/0.log" Feb 26 12:20:16 crc kubenswrapper[4699]: I0226 12:20:16.993619 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-67d4f89fb9-65kmq_5d9e1983-3363-4542-a5f0-deb132ea6994/keystone-api/0.log" Feb 26 12:20:17 crc kubenswrapper[4699]: I0226 12:20:17.091346 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c685fadd-b283-40bc-9de2-3372317b9875/kube-state-metrics/0.log" Feb 26 12:20:17 crc kubenswrapper[4699]: I0226 12:20:17.148673 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-9wl2f_6436c321-6850-4db3-81b2-0dc329e10900/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:17 crc kubenswrapper[4699]: I0226 12:20:17.574761 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d45896d49-mh862_862cb546-78f8-4864-a158-9dc217ec2796/neutron-httpd/0.log" Feb 26 12:20:17 crc kubenswrapper[4699]: I0226 12:20:17.658219 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-dbp7l_59456382-a459-4f82-ac99-b96eb735ddb9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:17 crc kubenswrapper[4699]: I0226 12:20:17.678864 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-6d45896d49-mh862_862cb546-78f8-4864-a158-9dc217ec2796/neutron-api/0.log" Feb 26 12:20:18 crc kubenswrapper[4699]: I0226 12:20:18.257440 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2d0d807f-7fdc-4239-b7bb-1952c2f7c222/nova-api-log/0.log" Feb 26 12:20:18 crc kubenswrapper[4699]: I0226 12:20:18.369109 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2ff15a2d-962f-421b-be00-e3bf6ef22612/nova-cell0-conductor-conductor/0.log" Feb 26 12:20:18 crc kubenswrapper[4699]: I0226 12:20:18.572039 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2d0d807f-7fdc-4239-b7bb-1952c2f7c222/nova-api-api/0.log" Feb 26 12:20:18 crc kubenswrapper[4699]: I0226 12:20:18.620852 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ff2b3846-c197-4cc6-a442-0f466d97d53d/nova-cell1-conductor-conductor/0.log" Feb 26 12:20:18 crc kubenswrapper[4699]: I0226 12:20:18.726590 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8bb28763-ceae-456c-a0d6-5df33b478106/nova-cell1-novncproxy-novncproxy/0.log" Feb 26 12:20:18 crc kubenswrapper[4699]: I0226 12:20:18.911124 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-wv666_2c2e8329-038c-4347-b30f-f8b42f36cc67/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:19 crc kubenswrapper[4699]: I0226 12:20:19.273017 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_15752dfa-4afb-412f-99a0-75c5fe76f6a8/nova-metadata-log/0.log" Feb 26 12:20:19 crc kubenswrapper[4699]: I0226 12:20:19.532677 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9d8371db-373f-4a41-97cb-b2d00aa17571/nova-scheduler-scheduler/0.log" Feb 26 12:20:19 crc kubenswrapper[4699]: I0226 12:20:19.533857 
4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_edce8e75-6dd5-4fbd-8f76-bc6553cc27b9/mysql-bootstrap/0.log" Feb 26 12:20:19 crc kubenswrapper[4699]: I0226 12:20:19.691527 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_edce8e75-6dd5-4fbd-8f76-bc6553cc27b9/mysql-bootstrap/0.log" Feb 26 12:20:19 crc kubenswrapper[4699]: I0226 12:20:19.782910 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_edce8e75-6dd5-4fbd-8f76-bc6553cc27b9/galera/0.log" Feb 26 12:20:19 crc kubenswrapper[4699]: I0226 12:20:19.914815 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6fdc6b6d-ac77-4179-9864-f220d622c0f4/mysql-bootstrap/0.log" Feb 26 12:20:20 crc kubenswrapper[4699]: I0226 12:20:20.119940 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6fdc6b6d-ac77-4179-9864-f220d622c0f4/mysql-bootstrap/0.log" Feb 26 12:20:20 crc kubenswrapper[4699]: I0226 12:20:20.197336 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6fdc6b6d-ac77-4179-9864-f220d622c0f4/galera/0.log" Feb 26 12:20:20 crc kubenswrapper[4699]: I0226 12:20:20.323527 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_16db7cc3-bd7c-44aa-b92f-d2a645d96ef0/openstackclient/0.log" Feb 26 12:20:20 crc kubenswrapper[4699]: I0226 12:20:20.516076 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qfxsz_a4767003-9eba-4b86-933c-5bcbaa93e458/openstack-network-exporter/0.log" Feb 26 12:20:20 crc kubenswrapper[4699]: I0226 12:20:20.717974 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_15752dfa-4afb-412f-99a0-75c5fe76f6a8/nova-metadata-metadata/0.log" Feb 26 12:20:20 crc kubenswrapper[4699]: I0226 12:20:20.719141 4699 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-controller-nrvng_cd4015f0-f1a7-40d7-ae69-089f74a6873d/ovn-controller/0.log" Feb 26 12:20:20 crc kubenswrapper[4699]: I0226 12:20:20.804518 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gxnxl_8afc038e-11dc-4959-a6b0-61e9b1c2dc35/ovsdb-server-init/0.log" Feb 26 12:20:21 crc kubenswrapper[4699]: I0226 12:20:21.020782 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gxnxl_8afc038e-11dc-4959-a6b0-61e9b1c2dc35/ovs-vswitchd/0.log" Feb 26 12:20:21 crc kubenswrapper[4699]: I0226 12:20:21.022478 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gxnxl_8afc038e-11dc-4959-a6b0-61e9b1c2dc35/ovsdb-server-init/0.log" Feb 26 12:20:21 crc kubenswrapper[4699]: I0226 12:20:21.121185 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gxnxl_8afc038e-11dc-4959-a6b0-61e9b1c2dc35/ovsdb-server/0.log" Feb 26 12:20:21 crc kubenswrapper[4699]: I0226 12:20:21.248601 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-hmpqg_dd7a61f4-47ba-4b2f-9015-c6ab27ff3c6b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:21 crc kubenswrapper[4699]: I0226 12:20:21.283571 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8fbd47d6-02c1-4ac4-a981-231eb0f13530/openstack-network-exporter/0.log" Feb 26 12:20:21 crc kubenswrapper[4699]: I0226 12:20:21.320870 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8fbd47d6-02c1-4ac4-a981-231eb0f13530/ovn-northd/0.log" Feb 26 12:20:21 crc kubenswrapper[4699]: I0226 12:20:21.507818 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ef805480-81ec-4d0b-b2ca-06db4bf74383/openstack-network-exporter/0.log" Feb 26 12:20:21 crc kubenswrapper[4699]: I0226 12:20:21.538134 4699 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ef805480-81ec-4d0b-b2ca-06db4bf74383/ovsdbserver-nb/0.log" Feb 26 12:20:21 crc kubenswrapper[4699]: I0226 12:20:21.725459 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b981c8a5-ce76-4bc1-a018-28255391e3f2/openstack-network-exporter/0.log" Feb 26 12:20:21 crc kubenswrapper[4699]: I0226 12:20:21.733256 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b981c8a5-ce76-4bc1-a018-28255391e3f2/ovsdbserver-sb/0.log" Feb 26 12:20:21 crc kubenswrapper[4699]: I0226 12:20:21.941405 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d4878dd78-qpvzg_b7700bd0-21d8-4b96-9753-2619443038a3/placement-api/0.log" Feb 26 12:20:22 crc kubenswrapper[4699]: I0226 12:20:22.038842 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b731314-eb90-4a19-a425-2f9282af2a7f/setup-container/0.log" Feb 26 12:20:22 crc kubenswrapper[4699]: I0226 12:20:22.094481 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d4878dd78-qpvzg_b7700bd0-21d8-4b96-9753-2619443038a3/placement-log/0.log" Feb 26 12:20:22 crc kubenswrapper[4699]: I0226 12:20:22.202961 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b731314-eb90-4a19-a425-2f9282af2a7f/setup-container/0.log" Feb 26 12:20:22 crc kubenswrapper[4699]: I0226 12:20:22.267756 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b731314-eb90-4a19-a425-2f9282af2a7f/rabbitmq/0.log" Feb 26 12:20:22 crc kubenswrapper[4699]: I0226 12:20:22.416480 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0d9b2e6e-c43b-49ae-a71e-844610621e3e/setup-container/0.log" Feb 26 12:20:22 crc kubenswrapper[4699]: I0226 12:20:22.837772 4699 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-server-0_0d9b2e6e-c43b-49ae-a71e-844610621e3e/setup-container/0.log" Feb 26 12:20:22 crc kubenswrapper[4699]: I0226 12:20:22.866035 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0d9b2e6e-c43b-49ae-a71e-844610621e3e/rabbitmq/0.log" Feb 26 12:20:22 crc kubenswrapper[4699]: I0226 12:20:22.904383 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-lqq4l_a1aabb80-3c23-4f5a-9bd1-4d573089856c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:23 crc kubenswrapper[4699]: I0226 12:20:23.084351 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zdf2z_fcea0fcf-0c80-4334-9327-f0a57b385cc9/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:23 crc kubenswrapper[4699]: I0226 12:20:23.177604 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mqd9n_57bbec48-f33e-43b8-9f82-8cc3a42e7723/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:23 crc kubenswrapper[4699]: I0226 12:20:23.310666 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8w2tv_96b6beba-4e99-4cb7-b49b-3f211c5e12b7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:23 crc kubenswrapper[4699]: I0226 12:20:23.417742 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-t4sjg_2930a730-d5e2-49e1-a618-7428b999a73d/ssh-known-hosts-edpm-deployment/0.log" Feb 26 12:20:23 crc kubenswrapper[4699]: I0226 12:20:23.622056 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78cbc76b59-m6shv_5a4ece68-df2a-480c-9531-1d133d7f4bd0/proxy-server/0.log" Feb 26 12:20:23 crc kubenswrapper[4699]: I0226 12:20:23.736689 4699 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_swift-proxy-78cbc76b59-m6shv_5a4ece68-df2a-480c-9531-1d133d7f4bd0/proxy-httpd/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.307390 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-lqqdx_9125ee3a-a0b6-469b-b79d-3a376f2d5d91/swift-ring-rebalance/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.317111 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/account-auditor/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.356798 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/account-reaper/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.500051 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/account-server/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.544531 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/account-replicator/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.611524 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/container-auditor/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.640661 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/container-replicator/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.715569 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/container-server/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.770885 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/container-updater/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.815792 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/object-auditor/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.912878 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/object-expirer/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.913227 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/object-replicator/0.log" Feb 26 12:20:24 crc kubenswrapper[4699]: I0226 12:20:24.967942 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/object-server/0.log" Feb 26 12:20:25 crc kubenswrapper[4699]: I0226 12:20:25.060394 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/object-updater/0.log" Feb 26 12:20:25 crc kubenswrapper[4699]: I0226 12:20:25.130922 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/swift-recon-cron/0.log" Feb 26 12:20:25 crc kubenswrapper[4699]: I0226 12:20:25.142760 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f23ec57b-7ab1-4152-8108-e0e27b4ba95c/rsync/0.log" Feb 26 12:20:25 crc kubenswrapper[4699]: I0226 12:20:25.312247 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-zkpz9_08bdd16a-fc18-4262-9175-a05b613a76c9/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:26 crc kubenswrapper[4699]: I0226 12:20:26.107588 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_66beadbe-fd5d-48af-8a33-8a652c8d1c71/test-operator-logs-container/0.log" Feb 26 12:20:26 crc kubenswrapper[4699]: I0226 12:20:26.121390 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_19e02200-91be-49f8-8174-4a0bf6cda9dd/tempest-tests-tempest-tests-runner/0.log" Feb 26 12:20:26 crc kubenswrapper[4699]: I0226 12:20:26.351496 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9npsm_974c869a-b430-4a83-81d0-ece37d67c0b0/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 12:20:35 crc kubenswrapper[4699]: I0226 12:20:35.335403 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6530fcf8-efdc-4f91-96cb-4f4bdc8bd1d2/memcached/0.log" Feb 26 12:20:40 crc kubenswrapper[4699]: I0226 12:20:40.494281 4699 scope.go:117] "RemoveContainer" containerID="ae1928085c149280cf3addf69107c792048518ecf95f2de337f2886f53e0e594" Feb 26 12:20:41 crc kubenswrapper[4699]: I0226 12:20:41.585102 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:20:41 crc kubenswrapper[4699]: I0226 12:20:41.585487 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:20:41 crc kubenswrapper[4699]: I0226 12:20:41.585537 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 12:20:41 crc kubenswrapper[4699]: I0226 12:20:41.586382 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"321e80bae8579e8007aa1cc495575fd7eef57d9379aadf703c862dea223958e5"} pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 12:20:41 crc kubenswrapper[4699]: I0226 12:20:41.586432 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://321e80bae8579e8007aa1cc495575fd7eef57d9379aadf703c862dea223958e5" gracePeriod=600 Feb 26 12:20:42 crc kubenswrapper[4699]: I0226 12:20:42.572551 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="321e80bae8579e8007aa1cc495575fd7eef57d9379aadf703c862dea223958e5" exitCode=0 Feb 26 12:20:42 crc kubenswrapper[4699]: I0226 12:20:42.572642 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"321e80bae8579e8007aa1cc495575fd7eef57d9379aadf703c862dea223958e5"} Feb 26 12:20:42 crc kubenswrapper[4699]: I0226 12:20:42.573530 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerStarted","Data":"db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30"} Feb 26 12:20:42 crc kubenswrapper[4699]: I0226 12:20:42.573560 4699 scope.go:117] "RemoveContainer" 
containerID="994e7fffe13fcdad450e5df047474260de901881a302afaa9d85c3116c5763b7" Feb 26 12:20:55 crc kubenswrapper[4699]: I0226 12:20:55.252026 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-4k4sm_07c2552c-8182-4cfe-a397-39ad287029e5/manager/0.log" Feb 26 12:20:55 crc kubenswrapper[4699]: I0226 12:20:55.474267 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/util/0.log" Feb 26 12:20:55 crc kubenswrapper[4699]: I0226 12:20:55.665572 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/pull/0.log" Feb 26 12:20:55 crc kubenswrapper[4699]: I0226 12:20:55.720355 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/util/0.log" Feb 26 12:20:56 crc kubenswrapper[4699]: I0226 12:20:56.063865 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/pull/0.log" Feb 26 12:20:56 crc kubenswrapper[4699]: I0226 12:20:56.251408 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/util/0.log" Feb 26 12:20:56 crc kubenswrapper[4699]: I0226 12:20:56.255325 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/pull/0.log" Feb 26 12:20:56 crc kubenswrapper[4699]: I0226 12:20:56.438930 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_dfb67d78cf82038cea84fa4b5c0f8697740883dc99ef57c523054710b1xdxm8_449351cd-8256-4e21-b27e-be3c4db11ca5/extract/0.log" Feb 26 12:20:56 crc kubenswrapper[4699]: I0226 12:20:56.776930 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-jh7vz_27e251bb-8f9b-48d4-9ea3-81d03fd85244/manager/0.log" Feb 26 12:20:57 crc kubenswrapper[4699]: I0226 12:20:57.002556 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-t8c9f_7b204025-d5ff-4c74-96b9-6774b62e0cc4/manager/0.log" Feb 26 12:20:57 crc kubenswrapper[4699]: I0226 12:20:57.286994 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-qf9vd_619dff06-7255-4aab-9ffe-9f2561bcc904/manager/0.log" Feb 26 12:20:57 crc kubenswrapper[4699]: I0226 12:20:57.434207 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-xw85z_35555f68-d5c4-44b2-9dfa-af5f91f57c7c/manager/0.log" Feb 26 12:20:57 crc kubenswrapper[4699]: I0226 12:20:57.664053 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-5k85p_d56efcbf-3414-4bd1-9cbf-d56c434ac529/manager/0.log" Feb 26 12:20:57 crc kubenswrapper[4699]: I0226 12:20:57.878911 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-mtrs6_afbeb2d8-c332-447b-a931-9fe7b246914d/manager/0.log" Feb 26 12:20:57 crc kubenswrapper[4699]: I0226 12:20:57.974300 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-d2pxc_a2c419ab-2a99-4d37-b46c-b84024f24b2e/manager/0.log" Feb 26 12:20:58 crc kubenswrapper[4699]: I0226 12:20:58.138442 4699 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-9gwwj_caabfe5b-420c-47c7-9ed2-b4ac9b2d54f2/manager/0.log" Feb 26 12:20:58 crc kubenswrapper[4699]: I0226 12:20:58.317926 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-95whc_38eef260-c32f-4568-9936-6197ba984f05/manager/0.log" Feb 26 12:20:58 crc kubenswrapper[4699]: I0226 12:20:58.678012 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-6gblm_54959b79-361c-415a-986d-1af6d8eb6701/manager/0.log" Feb 26 12:20:58 crc kubenswrapper[4699]: I0226 12:20:58.717854 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-4mghs_0ca72d8e-ed5f-47e4-b9c0-62b41fd687ee/manager/0.log" Feb 26 12:20:58 crc kubenswrapper[4699]: I0226 12:20:58.734251 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-2wj2n_a6e7ca85-e18b-4605-9180-316f65b82006/manager/0.log" Feb 26 12:20:58 crc kubenswrapper[4699]: I0226 12:20:58.878038 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cktpfb_ce7c40ca-05ad-49ca-a091-02ac588c3eb7/manager/0.log" Feb 26 12:20:59 crc kubenswrapper[4699]: I0226 12:20:59.217905 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7c5cc54f9c-wjrrd_3a6d1210-ece5-4666-80bf-c7c7821e441c/operator/0.log" Feb 26 12:20:59 crc kubenswrapper[4699]: I0226 12:20:59.343492 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gmh8j_22cfe789-87ae-4b23-91c2-cbb5112e4285/registry-server/0.log" Feb 26 12:20:59 crc kubenswrapper[4699]: I0226 12:20:59.472136 4699 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-96png_a90c4025-7bd1-401b-8f92-5f15a58fb3d6/manager/0.log" Feb 26 12:20:59 crc kubenswrapper[4699]: I0226 12:20:59.735204 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-jxr77_7545763d-d2d2-4b6e-980d-737062f0a894/manager/0.log" Feb 26 12:20:59 crc kubenswrapper[4699]: I0226 12:20:59.808253 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ghqf4_8d440653-f1c3-483c-a37d-463dcfc15224/operator/0.log" Feb 26 12:20:59 crc kubenswrapper[4699]: I0226 12:20:59.985700 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-bqvxr_33fc0a61-18c9-4e80-b898-92a5b1b71dac/manager/0.log" Feb 26 12:21:00 crc kubenswrapper[4699]: I0226 12:21:00.448472 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-mwvnr_5be0c14a-e51f-4b69-ab58-c0cac66910e2/manager/0.log" Feb 26 12:21:00 crc kubenswrapper[4699]: I0226 12:21:00.483482 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-f9kz5_15255a9b-0767-4518-8e81-ca9044f9190a/manager/0.log" Feb 26 12:21:00 crc kubenswrapper[4699]: I0226 12:21:00.693107 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-fnnc7_a2b3bf3b-a815-4033-983b-eedc16b8609f/manager/0.log" Feb 26 12:21:00 crc kubenswrapper[4699]: I0226 12:21:00.882721 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-947f4f86b-m69sv_ebf1a568-be30-4ceb-bc67-e3158a0280b9/manager/0.log" Feb 26 12:21:05 crc kubenswrapper[4699]: I0226 
12:21:05.177778 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-sndb9_1814471e-5f82-4464-9528-75da66d7235b/manager/0.log" Feb 26 12:21:23 crc kubenswrapper[4699]: I0226 12:21:23.540183 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-p9wj4_bad776f4-e24b-41f1-88d8-2b1fe6258783/control-plane-machine-set-operator/0.log" Feb 26 12:21:23 crc kubenswrapper[4699]: I0226 12:21:23.626714 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pw64v_5d015dd8-56c9-4f61-b133-4951cda91ca5/kube-rbac-proxy/0.log" Feb 26 12:21:23 crc kubenswrapper[4699]: I0226 12:21:23.647958 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pw64v_5d015dd8-56c9-4f61-b133-4951cda91ca5/machine-api-operator/0.log" Feb 26 12:21:36 crc kubenswrapper[4699]: I0226 12:21:36.352136 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-fhn2n_fc42522b-c5f4-4df2-8435-3e3985dd960c/cert-manager-controller/0.log" Feb 26 12:21:36 crc kubenswrapper[4699]: I0226 12:21:36.529445 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-dswxp_f026799a-39c7-443e-9801-f046ba8ae94b/cert-manager-cainjector/0.log" Feb 26 12:21:36 crc kubenswrapper[4699]: I0226 12:21:36.667721 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-l2fdt_fad1f923-b22c-4c0d-9eb9-684636bc76c0/cert-manager-webhook/0.log" Feb 26 12:21:50 crc kubenswrapper[4699]: I0226 12:21:50.374479 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-7f4bx_13fc1aa0-a043-4b42-952b-7f718ff577d2/nmstate-console-plugin/0.log" Feb 26 12:21:50 crc 
kubenswrapper[4699]: I0226 12:21:50.581810 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5jrwg_80de38f0-8620-4e27-988e-6d85d7c8bc24/nmstate-handler/0.log" Feb 26 12:21:50 crc kubenswrapper[4699]: I0226 12:21:50.670530 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-jnrsc_c4897df9-3a79-41bf-a7ba-7a72d888f8e1/kube-rbac-proxy/0.log" Feb 26 12:21:50 crc kubenswrapper[4699]: I0226 12:21:50.752752 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-jnrsc_c4897df9-3a79-41bf-a7ba-7a72d888f8e1/nmstate-metrics/0.log" Feb 26 12:21:50 crc kubenswrapper[4699]: I0226 12:21:50.899225 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-8l8n8_15312afe-49aa-4681-8513-6ed9c774d222/nmstate-operator/0.log" Feb 26 12:21:50 crc kubenswrapper[4699]: I0226 12:21:50.966377 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-qmw66_d674e733-7357-43e5-be9c-4d4e9bad252c/nmstate-webhook/0.log" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.142739 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535142-vfnhz"] Feb 26 12:22:00 crc kubenswrapper[4699]: E0226 12:22:00.144166 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924cba42-fd14-4d50-815d-0d8fa83c6b06" containerName="oc" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.144181 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="924cba42-fd14-4d50-815d-0d8fa83c6b06" containerName="oc" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.144432 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="924cba42-fd14-4d50-815d-0d8fa83c6b06" containerName="oc" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.145084 4699 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535142-vfnhz" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.148245 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.148535 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.148746 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.178254 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535142-vfnhz"] Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.252101 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9wm\" (UniqueName: \"kubernetes.io/projected/8a1ba6a1-6a82-47c4-9706-f77275f34d3a-kube-api-access-jn9wm\") pod \"auto-csr-approver-29535142-vfnhz\" (UID: \"8a1ba6a1-6a82-47c4-9706-f77275f34d3a\") " pod="openshift-infra/auto-csr-approver-29535142-vfnhz" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.354162 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9wm\" (UniqueName: \"kubernetes.io/projected/8a1ba6a1-6a82-47c4-9706-f77275f34d3a-kube-api-access-jn9wm\") pod \"auto-csr-approver-29535142-vfnhz\" (UID: \"8a1ba6a1-6a82-47c4-9706-f77275f34d3a\") " pod="openshift-infra/auto-csr-approver-29535142-vfnhz" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.378891 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn9wm\" (UniqueName: \"kubernetes.io/projected/8a1ba6a1-6a82-47c4-9706-f77275f34d3a-kube-api-access-jn9wm\") pod \"auto-csr-approver-29535142-vfnhz\" (UID: \"8a1ba6a1-6a82-47c4-9706-f77275f34d3a\") " 
pod="openshift-infra/auto-csr-approver-29535142-vfnhz" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.468454 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535142-vfnhz" Feb 26 12:22:00 crc kubenswrapper[4699]: I0226 12:22:00.987987 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535142-vfnhz"] Feb 26 12:22:01 crc kubenswrapper[4699]: I0226 12:22:01.300044 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535142-vfnhz" event={"ID":"8a1ba6a1-6a82-47c4-9706-f77275f34d3a","Type":"ContainerStarted","Data":"33a0346b64cb96af9bc84b1f89b7928e64871fca412d10252ed5502ee0a2b2fa"} Feb 26 12:22:03 crc kubenswrapper[4699]: I0226 12:22:03.318603 4699 generic.go:334] "Generic (PLEG): container finished" podID="8a1ba6a1-6a82-47c4-9706-f77275f34d3a" containerID="99ba6bfc8510f503ebb43686b1e59641b632a364615001984ba3d20ee91c082d" exitCode=0 Feb 26 12:22:03 crc kubenswrapper[4699]: I0226 12:22:03.318703 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535142-vfnhz" event={"ID":"8a1ba6a1-6a82-47c4-9706-f77275f34d3a","Type":"ContainerDied","Data":"99ba6bfc8510f503ebb43686b1e59641b632a364615001984ba3d20ee91c082d"} Feb 26 12:22:04 crc kubenswrapper[4699]: I0226 12:22:04.688294 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535142-vfnhz" Feb 26 12:22:04 crc kubenswrapper[4699]: I0226 12:22:04.838744 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn9wm\" (UniqueName: \"kubernetes.io/projected/8a1ba6a1-6a82-47c4-9706-f77275f34d3a-kube-api-access-jn9wm\") pod \"8a1ba6a1-6a82-47c4-9706-f77275f34d3a\" (UID: \"8a1ba6a1-6a82-47c4-9706-f77275f34d3a\") " Feb 26 12:22:04 crc kubenswrapper[4699]: I0226 12:22:04.844509 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a1ba6a1-6a82-47c4-9706-f77275f34d3a-kube-api-access-jn9wm" (OuterVolumeSpecName: "kube-api-access-jn9wm") pod "8a1ba6a1-6a82-47c4-9706-f77275f34d3a" (UID: "8a1ba6a1-6a82-47c4-9706-f77275f34d3a"). InnerVolumeSpecName "kube-api-access-jn9wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:22:04 crc kubenswrapper[4699]: I0226 12:22:04.941116 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn9wm\" (UniqueName: \"kubernetes.io/projected/8a1ba6a1-6a82-47c4-9706-f77275f34d3a-kube-api-access-jn9wm\") on node \"crc\" DevicePath \"\"" Feb 26 12:22:05 crc kubenswrapper[4699]: I0226 12:22:05.339462 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535142-vfnhz" event={"ID":"8a1ba6a1-6a82-47c4-9706-f77275f34d3a","Type":"ContainerDied","Data":"33a0346b64cb96af9bc84b1f89b7928e64871fca412d10252ed5502ee0a2b2fa"} Feb 26 12:22:05 crc kubenswrapper[4699]: I0226 12:22:05.339502 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a0346b64cb96af9bc84b1f89b7928e64871fca412d10252ed5502ee0a2b2fa" Feb 26 12:22:05 crc kubenswrapper[4699]: I0226 12:22:05.339563 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535142-vfnhz" Feb 26 12:22:05 crc kubenswrapper[4699]: I0226 12:22:05.756078 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535136-zr5lr"] Feb 26 12:22:05 crc kubenswrapper[4699]: I0226 12:22:05.764901 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535136-zr5lr"] Feb 26 12:22:06 crc kubenswrapper[4699]: I0226 12:22:06.270924 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502aea63-b1be-4c9e-850b-bc5a2503b628" path="/var/lib/kubelet/pods/502aea63-b1be-4c9e-850b-bc5a2503b628/volumes" Feb 26 12:22:18 crc kubenswrapper[4699]: I0226 12:22:18.965252 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-bs5nk_6ef6a9d7-6997-485a-a812-ded9d3a2df85/kube-rbac-proxy/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.038456 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-bs5nk_6ef6a9d7-6997-485a-a812-ded9d3a2df85/controller/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.179922 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-svsrb_35357e2c-2a03-46f8-bc28-f7daad3b679d/frr-k8s-webhook-server/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.249561 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-frr-files/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.431478 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-frr-files/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.462935 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-reloader/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.462984 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-metrics/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.493170 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-reloader/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.613733 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-metrics/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.647511 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-metrics/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.665999 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-frr-files/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.672092 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-reloader/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.922610 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-frr-files/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.927340 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-metrics/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.929239 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/cp-reloader/0.log" Feb 26 12:22:19 crc kubenswrapper[4699]: I0226 12:22:19.930917 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/controller/0.log" Feb 26 12:22:20 crc kubenswrapper[4699]: I0226 12:22:20.115154 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/kube-rbac-proxy/0.log" Feb 26 12:22:20 crc kubenswrapper[4699]: I0226 12:22:20.138342 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/frr-metrics/0.log" Feb 26 12:22:20 crc kubenswrapper[4699]: I0226 12:22:20.196823 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/kube-rbac-proxy-frr/0.log" Feb 26 12:22:20 crc kubenswrapper[4699]: I0226 12:22:20.375686 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/reloader/0.log" Feb 26 12:22:20 crc kubenswrapper[4699]: I0226 12:22:20.430693 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5d58b8658b-qjr5b_cacc25f7-9d5d-4ba4-b7b7-f95d7cba63e8/manager/0.log" Feb 26 12:22:20 crc kubenswrapper[4699]: I0226 12:22:20.575434 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6d98597f89-glkjh_af2438c1-8812-4bb1-8999-66cb8d804c05/webhook-server/0.log" Feb 26 12:22:20 crc kubenswrapper[4699]: I0226 12:22:20.745498 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l8phj_d656ca89-f955-44bb-9944-f75bf485a254/kube-rbac-proxy/0.log" Feb 26 12:22:21 crc kubenswrapper[4699]: I0226 12:22:21.491223 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-l8phj_d656ca89-f955-44bb-9944-f75bf485a254/speaker/0.log" Feb 26 12:22:21 crc kubenswrapper[4699]: I0226 12:22:21.810522 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wszs7_dfa29d17-a66a-42fe-8275-1526f8fb6dc9/frr/0.log" Feb 26 12:22:33 crc kubenswrapper[4699]: I0226 12:22:33.445163 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/util/0.log" Feb 26 12:22:33 crc kubenswrapper[4699]: I0226 12:22:33.657437 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/util/0.log" Feb 26 12:22:33 crc kubenswrapper[4699]: I0226 12:22:33.711915 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/pull/0.log" Feb 26 12:22:33 crc kubenswrapper[4699]: I0226 12:22:33.747691 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/pull/0.log" Feb 26 12:22:33 crc kubenswrapper[4699]: I0226 12:22:33.904660 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/pull/0.log" Feb 26 12:22:33 crc kubenswrapper[4699]: I0226 12:22:33.929537 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/extract/0.log" Feb 26 12:22:33 crc kubenswrapper[4699]: I0226 12:22:33.930315 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8274hh5_a0751c34-68ec-4fd1-821f-94e314dd5621/util/0.log" Feb 26 12:22:34 crc kubenswrapper[4699]: I0226 12:22:34.069477 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-utilities/0.log" Feb 26 12:22:34 crc kubenswrapper[4699]: I0226 12:22:34.229094 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-utilities/0.log" Feb 26 12:22:34 crc kubenswrapper[4699]: I0226 12:22:34.234567 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-content/0.log" Feb 26 12:22:34 crc kubenswrapper[4699]: I0226 12:22:34.248351 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-content/0.log" Feb 26 12:22:34 crc kubenswrapper[4699]: I0226 12:22:34.418197 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-content/0.log" Feb 26 12:22:34 crc kubenswrapper[4699]: I0226 12:22:34.436125 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/extract-utilities/0.log" Feb 26 12:22:34 crc kubenswrapper[4699]: I0226 12:22:34.584632 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-utilities/0.log" Feb 26 12:22:34 crc kubenswrapper[4699]: I0226 12:22:34.886013 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-utilities/0.log" Feb 26 12:22:34 crc kubenswrapper[4699]: I0226 12:22:34.926541 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-content/0.log" Feb 26 12:22:34 crc kubenswrapper[4699]: I0226 12:22:34.975348 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-content/0.log" Feb 26 12:22:34 crc kubenswrapper[4699]: I0226 12:22:34.981716 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vsj9_a23d2795-eec2-4e37-8902-7f9220e44cb1/registry-server/0.log" Feb 26 12:22:35 crc kubenswrapper[4699]: I0226 12:22:35.080924 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-utilities/0.log" Feb 26 12:22:35 crc kubenswrapper[4699]: I0226 12:22:35.160938 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/extract-content/0.log" Feb 26 12:22:35 crc kubenswrapper[4699]: I0226 12:22:35.274385 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/util/0.log" Feb 26 12:22:35 crc kubenswrapper[4699]: I0226 12:22:35.530824 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/pull/0.log" Feb 26 12:22:35 crc kubenswrapper[4699]: I0226 12:22:35.557680 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/pull/0.log" Feb 26 12:22:35 crc kubenswrapper[4699]: I0226 12:22:35.595644 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/util/0.log" Feb 26 12:22:35 crc kubenswrapper[4699]: I0226 12:22:35.788462 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/pull/0.log" Feb 26 12:22:35 crc kubenswrapper[4699]: I0226 12:22:35.795578 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/util/0.log" Feb 26 12:22:35 crc kubenswrapper[4699]: I0226 12:22:35.813785 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hfvdf_fde9effb-9fa9-46a0-a8e6-08080ed0b8ba/registry-server/0.log" Feb 26 12:22:35 crc kubenswrapper[4699]: I0226 12:22:35.840775 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4d7whb_2628fd13-0f89-4bb3-9b76-86a9331a303e/extract/0.log" Feb 26 12:22:35 crc kubenswrapper[4699]: I0226 12:22:35.971745 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nwbkq_43a980f6-1eff-4610-aa3e-69729c3eb7c7/marketplace-operator/0.log" Feb 26 12:22:36 crc kubenswrapper[4699]: I0226 12:22:36.069554 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-utilities/0.log" Feb 26 12:22:36 crc kubenswrapper[4699]: I0226 12:22:36.247835 4699 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-utilities/0.log" Feb 26 12:22:36 crc kubenswrapper[4699]: I0226 12:22:36.252481 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-content/0.log" Feb 26 12:22:36 crc kubenswrapper[4699]: I0226 12:22:36.309031 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-content/0.log" Feb 26 12:22:36 crc kubenswrapper[4699]: I0226 12:22:36.479438 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-utilities/0.log" Feb 26 12:22:36 crc kubenswrapper[4699]: I0226 12:22:36.548953 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/extract-content/0.log" Feb 26 12:22:36 crc kubenswrapper[4699]: I0226 12:22:36.734337 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-utilities/0.log" Feb 26 12:22:36 crc kubenswrapper[4699]: I0226 12:22:36.734937 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r555d_d174508d-e5d5-4912-a652-e7b264f1c882/registry-server/0.log" Feb 26 12:22:36 crc kubenswrapper[4699]: I0226 12:22:36.913576 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-content/0.log" Feb 26 12:22:36 crc kubenswrapper[4699]: I0226 12:22:36.918731 4699 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-utilities/0.log" Feb 26 12:22:36 crc kubenswrapper[4699]: I0226 12:22:36.921488 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-content/0.log" Feb 26 12:22:37 crc kubenswrapper[4699]: I0226 12:22:37.162181 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-utilities/0.log" Feb 26 12:22:37 crc kubenswrapper[4699]: I0226 12:22:37.168103 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/extract-content/0.log" Feb 26 12:22:37 crc kubenswrapper[4699]: I0226 12:22:37.680403 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wqmqz_a69df934-4fa7-472d-abe7-8fa4ec5d4296/registry-server/0.log" Feb 26 12:22:40 crc kubenswrapper[4699]: I0226 12:22:40.653004 4699 scope.go:117] "RemoveContainer" containerID="6e828c6eb232b14fedfc4161c27c5a5dd3b91bd1fe215ef080f8deb69fce1e31" Feb 26 12:22:41 crc kubenswrapper[4699]: I0226 12:22:41.585243 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:22:41 crc kubenswrapper[4699]: I0226 12:22:41.585648 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:23:11 
crc kubenswrapper[4699]: I0226 12:23:11.585684 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:23:11 crc kubenswrapper[4699]: I0226 12:23:11.586285 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:23:41 crc kubenswrapper[4699]: I0226 12:23:41.584788 4699 patch_prober.go:28] interesting pod/machine-config-daemon-28p79 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:23:41 crc kubenswrapper[4699]: I0226 12:23:41.585367 4699 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:23:41 crc kubenswrapper[4699]: I0226 12:23:41.585421 4699 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-28p79" Feb 26 12:23:41 crc kubenswrapper[4699]: I0226 12:23:41.586305 4699 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30"} 
pod="openshift-machine-config-operator/machine-config-daemon-28p79" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 12:23:41 crc kubenswrapper[4699]: I0226 12:23:41.586365 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerName="machine-config-daemon" containerID="cri-o://db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" gracePeriod=600 Feb 26 12:23:41 crc kubenswrapper[4699]: E0226 12:23:41.736028 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:23:42 crc kubenswrapper[4699]: I0226 12:23:42.204668 4699 generic.go:334] "Generic (PLEG): container finished" podID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" exitCode=0 Feb 26 12:23:42 crc kubenswrapper[4699]: I0226 12:23:42.204740 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-28p79" event={"ID":"e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff","Type":"ContainerDied","Data":"db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30"} Feb 26 12:23:42 crc kubenswrapper[4699]: I0226 12:23:42.204789 4699 scope.go:117] "RemoveContainer" containerID="321e80bae8579e8007aa1cc495575fd7eef57d9379aadf703c862dea223958e5" Feb 26 12:23:42 crc kubenswrapper[4699]: I0226 12:23:42.206973 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 
26 12:23:42 crc kubenswrapper[4699]: E0226 12:23:42.209575 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:23:55 crc kubenswrapper[4699]: I0226 12:23:55.261317 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:23:55 crc kubenswrapper[4699]: E0226 12:23:55.262161 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.147345 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535144-5h2px"] Feb 26 12:24:00 crc kubenswrapper[4699]: E0226 12:24:00.150030 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1ba6a1-6a82-47c4-9706-f77275f34d3a" containerName="oc" Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.150204 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1ba6a1-6a82-47c4-9706-f77275f34d3a" containerName="oc" Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.150820 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a1ba6a1-6a82-47c4-9706-f77275f34d3a" containerName="oc" Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.151919 4699 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535144-5h2px" Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.156445 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.156714 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.157089 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.164966 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535144-5h2px"] Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.262295 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ndns\" (UniqueName: \"kubernetes.io/projected/1c4ce589-abf9-443e-8f50-2d1904d537ad-kube-api-access-8ndns\") pod \"auto-csr-approver-29535144-5h2px\" (UID: \"1c4ce589-abf9-443e-8f50-2d1904d537ad\") " pod="openshift-infra/auto-csr-approver-29535144-5h2px" Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.365341 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ndns\" (UniqueName: \"kubernetes.io/projected/1c4ce589-abf9-443e-8f50-2d1904d537ad-kube-api-access-8ndns\") pod \"auto-csr-approver-29535144-5h2px\" (UID: \"1c4ce589-abf9-443e-8f50-2d1904d537ad\") " pod="openshift-infra/auto-csr-approver-29535144-5h2px" Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.637583 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ndns\" (UniqueName: \"kubernetes.io/projected/1c4ce589-abf9-443e-8f50-2d1904d537ad-kube-api-access-8ndns\") pod \"auto-csr-approver-29535144-5h2px\" (UID: \"1c4ce589-abf9-443e-8f50-2d1904d537ad\") " 
pod="openshift-infra/auto-csr-approver-29535144-5h2px" Feb 26 12:24:00 crc kubenswrapper[4699]: I0226 12:24:00.773404 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535144-5h2px" Feb 26 12:24:01 crc kubenswrapper[4699]: I0226 12:24:01.233782 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535144-5h2px"] Feb 26 12:24:01 crc kubenswrapper[4699]: I0226 12:24:01.404164 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535144-5h2px" event={"ID":"1c4ce589-abf9-443e-8f50-2d1904d537ad","Type":"ContainerStarted","Data":"3ccbcce0173c0d2bec4896a426264bd199f80a8b3d7002d7d0419cc9a24823a7"} Feb 26 12:24:03 crc kubenswrapper[4699]: I0226 12:24:03.422299 4699 generic.go:334] "Generic (PLEG): container finished" podID="1c4ce589-abf9-443e-8f50-2d1904d537ad" containerID="78ce7123b16eb0d6213f96c0626817e0eb21374dea43ac7a0eeccd31bcc7f327" exitCode=0 Feb 26 12:24:03 crc kubenswrapper[4699]: I0226 12:24:03.422352 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535144-5h2px" event={"ID":"1c4ce589-abf9-443e-8f50-2d1904d537ad","Type":"ContainerDied","Data":"78ce7123b16eb0d6213f96c0626817e0eb21374dea43ac7a0eeccd31bcc7f327"} Feb 26 12:24:04 crc kubenswrapper[4699]: I0226 12:24:04.757485 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535144-5h2px" Feb 26 12:24:04 crc kubenswrapper[4699]: I0226 12:24:04.868306 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ndns\" (UniqueName: \"kubernetes.io/projected/1c4ce589-abf9-443e-8f50-2d1904d537ad-kube-api-access-8ndns\") pod \"1c4ce589-abf9-443e-8f50-2d1904d537ad\" (UID: \"1c4ce589-abf9-443e-8f50-2d1904d537ad\") " Feb 26 12:24:04 crc kubenswrapper[4699]: I0226 12:24:04.876628 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c4ce589-abf9-443e-8f50-2d1904d537ad-kube-api-access-8ndns" (OuterVolumeSpecName: "kube-api-access-8ndns") pod "1c4ce589-abf9-443e-8f50-2d1904d537ad" (UID: "1c4ce589-abf9-443e-8f50-2d1904d537ad"). InnerVolumeSpecName "kube-api-access-8ndns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:24:04 crc kubenswrapper[4699]: I0226 12:24:04.971092 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ndns\" (UniqueName: \"kubernetes.io/projected/1c4ce589-abf9-443e-8f50-2d1904d537ad-kube-api-access-8ndns\") on node \"crc\" DevicePath \"\"" Feb 26 12:24:05 crc kubenswrapper[4699]: I0226 12:24:05.443053 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535144-5h2px" event={"ID":"1c4ce589-abf9-443e-8f50-2d1904d537ad","Type":"ContainerDied","Data":"3ccbcce0173c0d2bec4896a426264bd199f80a8b3d7002d7d0419cc9a24823a7"} Feb 26 12:24:05 crc kubenswrapper[4699]: I0226 12:24:05.443347 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ccbcce0173c0d2bec4896a426264bd199f80a8b3d7002d7d0419cc9a24823a7" Feb 26 12:24:05 crc kubenswrapper[4699]: I0226 12:24:05.443130 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535144-5h2px" Feb 26 12:24:05 crc kubenswrapper[4699]: I0226 12:24:05.826052 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535138-ghxdv"] Feb 26 12:24:05 crc kubenswrapper[4699]: I0226 12:24:05.837142 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535138-ghxdv"] Feb 26 12:24:06 crc kubenswrapper[4699]: I0226 12:24:06.276514 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d73a20e-eea0-421b-8efd-6fd86f1e4d98" path="/var/lib/kubelet/pods/3d73a20e-eea0-421b-8efd-6fd86f1e4d98/volumes" Feb 26 12:24:10 crc kubenswrapper[4699]: I0226 12:24:10.260525 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:24:10 crc kubenswrapper[4699]: E0226 12:24:10.261070 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:24:25 crc kubenswrapper[4699]: I0226 12:24:25.260930 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:24:25 crc kubenswrapper[4699]: E0226 12:24:25.261864 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" 
podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:24:28 crc kubenswrapper[4699]: I0226 12:24:28.705315 4699 generic.go:334] "Generic (PLEG): container finished" podID="e1a2e674-d3fd-4fac-b5e0-b201dd644f25" containerID="f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0" exitCode=0 Feb 26 12:24:28 crc kubenswrapper[4699]: I0226 12:24:28.705408 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l5g/must-gather-zwd9v" event={"ID":"e1a2e674-d3fd-4fac-b5e0-b201dd644f25","Type":"ContainerDied","Data":"f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0"} Feb 26 12:24:28 crc kubenswrapper[4699]: I0226 12:24:28.706234 4699 scope.go:117] "RemoveContainer" containerID="f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0" Feb 26 12:24:28 crc kubenswrapper[4699]: I0226 12:24:28.819647 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l2l5g_must-gather-zwd9v_e1a2e674-d3fd-4fac-b5e0-b201dd644f25/gather/0.log" Feb 26 12:24:37 crc kubenswrapper[4699]: I0226 12:24:37.261272 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:24:37 crc kubenswrapper[4699]: E0226 12:24:37.262361 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:24:39 crc kubenswrapper[4699]: I0226 12:24:39.853448 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l2l5g/must-gather-zwd9v"] Feb 26 12:24:39 crc kubenswrapper[4699]: I0226 12:24:39.854312 4699 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-must-gather-l2l5g/must-gather-zwd9v" podUID="e1a2e674-d3fd-4fac-b5e0-b201dd644f25" containerName="copy" containerID="cri-o://69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e" gracePeriod=2 Feb 26 12:24:39 crc kubenswrapper[4699]: I0226 12:24:39.868867 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l2l5g/must-gather-zwd9v"] Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.290742 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l2l5g_must-gather-zwd9v_e1a2e674-d3fd-4fac-b5e0-b201dd644f25/copy/0.log" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.291623 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l5g/must-gather-zwd9v" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.421645 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-must-gather-output\") pod \"e1a2e674-d3fd-4fac-b5e0-b201dd644f25\" (UID: \"e1a2e674-d3fd-4fac-b5e0-b201dd644f25\") " Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.421715 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlmbm\" (UniqueName: \"kubernetes.io/projected/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-kube-api-access-zlmbm\") pod \"e1a2e674-d3fd-4fac-b5e0-b201dd644f25\" (UID: \"e1a2e674-d3fd-4fac-b5e0-b201dd644f25\") " Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.428719 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-kube-api-access-zlmbm" (OuterVolumeSpecName: "kube-api-access-zlmbm") pod "e1a2e674-d3fd-4fac-b5e0-b201dd644f25" (UID: "e1a2e674-d3fd-4fac-b5e0-b201dd644f25"). InnerVolumeSpecName "kube-api-access-zlmbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.523697 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlmbm\" (UniqueName: \"kubernetes.io/projected/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-kube-api-access-zlmbm\") on node \"crc\" DevicePath \"\"" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.610925 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e1a2e674-d3fd-4fac-b5e0-b201dd644f25" (UID: "e1a2e674-d3fd-4fac-b5e0-b201dd644f25"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.631148 4699 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1a2e674-d3fd-4fac-b5e0-b201dd644f25-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.833672 4699 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l2l5g_must-gather-zwd9v_e1a2e674-d3fd-4fac-b5e0-b201dd644f25/copy/0.log" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.834183 4699 generic.go:334] "Generic (PLEG): container finished" podID="e1a2e674-d3fd-4fac-b5e0-b201dd644f25" containerID="69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e" exitCode=143 Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.834250 4699 scope.go:117] "RemoveContainer" containerID="69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.834282 4699 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l2l5g/must-gather-zwd9v" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.890162 4699 scope.go:117] "RemoveContainer" containerID="f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.974839 4699 scope.go:117] "RemoveContainer" containerID="69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e" Feb 26 12:24:40 crc kubenswrapper[4699]: E0226 12:24:40.975291 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e\": container with ID starting with 69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e not found: ID does not exist" containerID="69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.975332 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e"} err="failed to get container status \"69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e\": rpc error: code = NotFound desc = could not find container \"69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e\": container with ID starting with 69b0e2cefceef6db918052929a4b9d82fd8ed38656cef9eade78076401ce807e not found: ID does not exist" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.975360 4699 scope.go:117] "RemoveContainer" containerID="f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0" Feb 26 12:24:40 crc kubenswrapper[4699]: E0226 12:24:40.975583 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0\": container with ID starting with 
f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0 not found: ID does not exist" containerID="f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0" Feb 26 12:24:40 crc kubenswrapper[4699]: I0226 12:24:40.975612 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0"} err="failed to get container status \"f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0\": rpc error: code = NotFound desc = could not find container \"f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0\": container with ID starting with f14354991cbd7caa139684c2977ca7c2377002acb68ea2e984ee4fc98ce473f0 not found: ID does not exist" Feb 26 12:24:41 crc kubenswrapper[4699]: I0226 12:24:41.317644 4699 scope.go:117] "RemoveContainer" containerID="206617da387e97d81b9b831e8d26536a56cede7f0a2daac8fe00d38d64e627ce" Feb 26 12:24:42 crc kubenswrapper[4699]: I0226 12:24:42.278645 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a2e674-d3fd-4fac-b5e0-b201dd644f25" path="/var/lib/kubelet/pods/e1a2e674-d3fd-4fac-b5e0-b201dd644f25/volumes" Feb 26 12:24:48 crc kubenswrapper[4699]: I0226 12:24:48.261588 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:24:48 crc kubenswrapper[4699]: E0226 12:24:48.262688 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:25:00 crc kubenswrapper[4699]: I0226 12:25:00.262994 4699 scope.go:117] "RemoveContainer" 
containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:25:00 crc kubenswrapper[4699]: E0226 12:25:00.263828 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:25:13 crc kubenswrapper[4699]: I0226 12:25:13.261140 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:25:13 crc kubenswrapper[4699]: E0226 12:25:13.261900 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:25:28 crc kubenswrapper[4699]: I0226 12:25:28.261236 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:25:28 crc kubenswrapper[4699]: E0226 12:25:28.262094 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:25:43 crc kubenswrapper[4699]: I0226 12:25:43.260208 4699 scope.go:117] 
"RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:25:43 crc kubenswrapper[4699]: E0226 12:25:43.261005 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.502654 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q6n44"] Feb 26 12:25:45 crc kubenswrapper[4699]: E0226 12:25:45.503517 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c4ce589-abf9-443e-8f50-2d1904d537ad" containerName="oc" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.503536 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4ce589-abf9-443e-8f50-2d1904d537ad" containerName="oc" Feb 26 12:25:45 crc kubenswrapper[4699]: E0226 12:25:45.503567 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a2e674-d3fd-4fac-b5e0-b201dd644f25" containerName="gather" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.503576 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a2e674-d3fd-4fac-b5e0-b201dd644f25" containerName="gather" Feb 26 12:25:45 crc kubenswrapper[4699]: E0226 12:25:45.503596 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a2e674-d3fd-4fac-b5e0-b201dd644f25" containerName="copy" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.503603 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a2e674-d3fd-4fac-b5e0-b201dd644f25" containerName="copy" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.505371 4699 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e1a2e674-d3fd-4fac-b5e0-b201dd644f25" containerName="gather" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.505400 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c4ce589-abf9-443e-8f50-2d1904d537ad" containerName="oc" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.505423 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a2e674-d3fd-4fac-b5e0-b201dd644f25" containerName="copy" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.507206 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.517400 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6n44"] Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.610746 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-utilities\") pod \"community-operators-q6n44\" (UID: \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") " pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.610828 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnsp6\" (UniqueName: \"kubernetes.io/projected/f891d809-e0a8-4802-a2a3-2fd5d0d45607-kube-api-access-pnsp6\") pod \"community-operators-q6n44\" (UID: \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") " pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.610987 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-catalog-content\") pod \"community-operators-q6n44\" (UID: 
\"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") " pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.713137 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-catalog-content\") pod \"community-operators-q6n44\" (UID: \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") " pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.713283 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-utilities\") pod \"community-operators-q6n44\" (UID: \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") " pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.713321 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnsp6\" (UniqueName: \"kubernetes.io/projected/f891d809-e0a8-4802-a2a3-2fd5d0d45607-kube-api-access-pnsp6\") pod \"community-operators-q6n44\" (UID: \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") " pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.713865 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-catalog-content\") pod \"community-operators-q6n44\" (UID: \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") " pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.713888 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-utilities\") pod \"community-operators-q6n44\" (UID: \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") 
" pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.735156 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnsp6\" (UniqueName: \"kubernetes.io/projected/f891d809-e0a8-4802-a2a3-2fd5d0d45607-kube-api-access-pnsp6\") pod \"community-operators-q6n44\" (UID: \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") " pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:45 crc kubenswrapper[4699]: I0226 12:25:45.837215 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:46 crc kubenswrapper[4699]: I0226 12:25:46.137101 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6n44"] Feb 26 12:25:46 crc kubenswrapper[4699]: I0226 12:25:46.498474 4699 generic.go:334] "Generic (PLEG): container finished" podID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" containerID="f63a74735857407b0a5b43b39056c8d70c60ab1d68d78bf2372e9fa58517adae" exitCode=0 Feb 26 12:25:46 crc kubenswrapper[4699]: I0226 12:25:46.498909 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6n44" event={"ID":"f891d809-e0a8-4802-a2a3-2fd5d0d45607","Type":"ContainerDied","Data":"f63a74735857407b0a5b43b39056c8d70c60ab1d68d78bf2372e9fa58517adae"} Feb 26 12:25:46 crc kubenswrapper[4699]: I0226 12:25:46.498968 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6n44" event={"ID":"f891d809-e0a8-4802-a2a3-2fd5d0d45607","Type":"ContainerStarted","Data":"f86903ef9072f064c8fb46b2178effaa7337edbc72bcdb85b9669719d99d0bbb"} Feb 26 12:25:46 crc kubenswrapper[4699]: I0226 12:25:46.500993 4699 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 12:25:49 crc kubenswrapper[4699]: I0226 12:25:49.537187 4699 generic.go:334] "Generic (PLEG): container 
finished" podID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" containerID="61ef761611c76e0cf0549c286fd56950d21e43fe0a1e1a9112ef04ae2af064a2" exitCode=0 Feb 26 12:25:49 crc kubenswrapper[4699]: I0226 12:25:49.537225 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6n44" event={"ID":"f891d809-e0a8-4802-a2a3-2fd5d0d45607","Type":"ContainerDied","Data":"61ef761611c76e0cf0549c286fd56950d21e43fe0a1e1a9112ef04ae2af064a2"} Feb 26 12:25:50 crc kubenswrapper[4699]: I0226 12:25:50.548262 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6n44" event={"ID":"f891d809-e0a8-4802-a2a3-2fd5d0d45607","Type":"ContainerStarted","Data":"2fa874b7398241542e9ad132af0702e21777587a267cb55b37517ce2148cb215"} Feb 26 12:25:50 crc kubenswrapper[4699]: I0226 12:25:50.572610 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q6n44" podStartSLOduration=2.065656132 podStartE2EDuration="5.572548488s" podCreationTimestamp="2026-02-26 12:25:45 +0000 UTC" firstStartedPulling="2026-02-26 12:25:46.500675526 +0000 UTC m=+4492.311501960" lastFinishedPulling="2026-02-26 12:25:50.007567882 +0000 UTC m=+4495.818394316" observedRunningTime="2026-02-26 12:25:50.569719896 +0000 UTC m=+4496.380546340" watchObservedRunningTime="2026-02-26 12:25:50.572548488 +0000 UTC m=+4496.383374932" Feb 26 12:25:55 crc kubenswrapper[4699]: I0226 12:25:55.837853 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:55 crc kubenswrapper[4699]: I0226 12:25:55.838450 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:55 crc kubenswrapper[4699]: I0226 12:25:55.892751 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q6n44" Feb 
26 12:25:56 crc kubenswrapper[4699]: I0226 12:25:56.267420 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:25:56 crc kubenswrapper[4699]: E0226 12:25:56.267814 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:25:56 crc kubenswrapper[4699]: I0226 12:25:56.648375 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:56 crc kubenswrapper[4699]: I0226 12:25:56.696815 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6n44"] Feb 26 12:25:58 crc kubenswrapper[4699]: I0226 12:25:58.623612 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q6n44" podUID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" containerName="registry-server" containerID="cri-o://2fa874b7398241542e9ad132af0702e21777587a267cb55b37517ce2148cb215" gracePeriod=2 Feb 26 12:25:59 crc kubenswrapper[4699]: I0226 12:25:59.643422 4699 generic.go:334] "Generic (PLEG): container finished" podID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" containerID="2fa874b7398241542e9ad132af0702e21777587a267cb55b37517ce2148cb215" exitCode=0 Feb 26 12:25:59 crc kubenswrapper[4699]: I0226 12:25:59.643566 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6n44" event={"ID":"f891d809-e0a8-4802-a2a3-2fd5d0d45607","Type":"ContainerDied","Data":"2fa874b7398241542e9ad132af0702e21777587a267cb55b37517ce2148cb215"} Feb 26 12:25:59 crc 
kubenswrapper[4699]: I0226 12:25:59.770904 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:25:59 crc kubenswrapper[4699]: I0226 12:25:59.900030 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-catalog-content\") pod \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\" (UID: \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") " Feb 26 12:25:59 crc kubenswrapper[4699]: I0226 12:25:59.900092 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnsp6\" (UniqueName: \"kubernetes.io/projected/f891d809-e0a8-4802-a2a3-2fd5d0d45607-kube-api-access-pnsp6\") pod \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\" (UID: \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") " Feb 26 12:25:59 crc kubenswrapper[4699]: I0226 12:25:59.900218 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-utilities\") pod \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\" (UID: \"f891d809-e0a8-4802-a2a3-2fd5d0d45607\") " Feb 26 12:25:59 crc kubenswrapper[4699]: I0226 12:25:59.901357 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-utilities" (OuterVolumeSpecName: "utilities") pod "f891d809-e0a8-4802-a2a3-2fd5d0d45607" (UID: "f891d809-e0a8-4802-a2a3-2fd5d0d45607"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:25:59 crc kubenswrapper[4699]: I0226 12:25:59.905387 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f891d809-e0a8-4802-a2a3-2fd5d0d45607-kube-api-access-pnsp6" (OuterVolumeSpecName: "kube-api-access-pnsp6") pod "f891d809-e0a8-4802-a2a3-2fd5d0d45607" (UID: "f891d809-e0a8-4802-a2a3-2fd5d0d45607"). InnerVolumeSpecName "kube-api-access-pnsp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.002924 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnsp6\" (UniqueName: \"kubernetes.io/projected/f891d809-e0a8-4802-a2a3-2fd5d0d45607-kube-api-access-pnsp6\") on node \"crc\" DevicePath \"\"" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.002963 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.151607 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535146-glcgw"] Feb 26 12:26:00 crc kubenswrapper[4699]: E0226 12:26:00.152385 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" containerName="extract-utilities" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.152403 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" containerName="extract-utilities" Feb 26 12:26:00 crc kubenswrapper[4699]: E0226 12:26:00.152420 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" containerName="extract-content" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.152427 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" 
containerName="extract-content" Feb 26 12:26:00 crc kubenswrapper[4699]: E0226 12:26:00.152458 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" containerName="registry-server" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.152463 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" containerName="registry-server" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.152656 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" containerName="registry-server" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.153247 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535146-glcgw" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.155310 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.156208 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.159407 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.182173 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535146-glcgw"] Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.208663 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrxqm\" (UniqueName: \"kubernetes.io/projected/654da7d8-e431-4d84-97bb-81179a5c382f-kube-api-access-mrxqm\") pod \"auto-csr-approver-29535146-glcgw\" (UID: \"654da7d8-e431-4d84-97bb-81179a5c382f\") " pod="openshift-infra/auto-csr-approver-29535146-glcgw" Feb 26 12:26:00 crc 
kubenswrapper[4699]: I0226 12:26:00.311185 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrxqm\" (UniqueName: \"kubernetes.io/projected/654da7d8-e431-4d84-97bb-81179a5c382f-kube-api-access-mrxqm\") pod \"auto-csr-approver-29535146-glcgw\" (UID: \"654da7d8-e431-4d84-97bb-81179a5c382f\") " pod="openshift-infra/auto-csr-approver-29535146-glcgw" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.322962 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f891d809-e0a8-4802-a2a3-2fd5d0d45607" (UID: "f891d809-e0a8-4802-a2a3-2fd5d0d45607"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.334044 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrxqm\" (UniqueName: \"kubernetes.io/projected/654da7d8-e431-4d84-97bb-81179a5c382f-kube-api-access-mrxqm\") pod \"auto-csr-approver-29535146-glcgw\" (UID: \"654da7d8-e431-4d84-97bb-81179a5c382f\") " pod="openshift-infra/auto-csr-approver-29535146-glcgw" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.412965 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f891d809-e0a8-4802-a2a3-2fd5d0d45607-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.483051 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535146-glcgw" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.656979 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6n44" event={"ID":"f891d809-e0a8-4802-a2a3-2fd5d0d45607","Type":"ContainerDied","Data":"f86903ef9072f064c8fb46b2178effaa7337edbc72bcdb85b9669719d99d0bbb"} Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.657317 4699 scope.go:117] "RemoveContainer" containerID="2fa874b7398241542e9ad132af0702e21777587a267cb55b37517ce2148cb215" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.657061 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6n44" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.690319 4699 scope.go:117] "RemoveContainer" containerID="61ef761611c76e0cf0549c286fd56950d21e43fe0a1e1a9112ef04ae2af064a2" Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.730560 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6n44"] Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.745402 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q6n44"] Feb 26 12:26:00 crc kubenswrapper[4699]: I0226 12:26:00.946619 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535146-glcgw"] Feb 26 12:26:01 crc kubenswrapper[4699]: I0226 12:26:01.238017 4699 scope.go:117] "RemoveContainer" containerID="f63a74735857407b0a5b43b39056c8d70c60ab1d68d78bf2372e9fa58517adae" Feb 26 12:26:01 crc kubenswrapper[4699]: I0226 12:26:01.668512 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535146-glcgw" event={"ID":"654da7d8-e431-4d84-97bb-81179a5c382f","Type":"ContainerStarted","Data":"7494a362efef771ab7f6ae41a997f1736c91f2472d7fbc4bddad128eb8ce5e9a"} Feb 26 12:26:02 
crc kubenswrapper[4699]: I0226 12:26:02.290266 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f891d809-e0a8-4802-a2a3-2fd5d0d45607" path="/var/lib/kubelet/pods/f891d809-e0a8-4802-a2a3-2fd5d0d45607/volumes" Feb 26 12:26:03 crc kubenswrapper[4699]: I0226 12:26:03.687832 4699 generic.go:334] "Generic (PLEG): container finished" podID="654da7d8-e431-4d84-97bb-81179a5c382f" containerID="ad9766b89198b6923833e66b567c7898f5dfe70a013994bcfa98a01accc75132" exitCode=0 Feb 26 12:26:03 crc kubenswrapper[4699]: I0226 12:26:03.687881 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535146-glcgw" event={"ID":"654da7d8-e431-4d84-97bb-81179a5c382f","Type":"ContainerDied","Data":"ad9766b89198b6923833e66b567c7898f5dfe70a013994bcfa98a01accc75132"} Feb 26 12:26:05 crc kubenswrapper[4699]: I0226 12:26:05.023840 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535146-glcgw" Feb 26 12:26:05 crc kubenswrapper[4699]: I0226 12:26:05.111341 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrxqm\" (UniqueName: \"kubernetes.io/projected/654da7d8-e431-4d84-97bb-81179a5c382f-kube-api-access-mrxqm\") pod \"654da7d8-e431-4d84-97bb-81179a5c382f\" (UID: \"654da7d8-e431-4d84-97bb-81179a5c382f\") " Feb 26 12:26:05 crc kubenswrapper[4699]: I0226 12:26:05.129157 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654da7d8-e431-4d84-97bb-81179a5c382f-kube-api-access-mrxqm" (OuterVolumeSpecName: "kube-api-access-mrxqm") pod "654da7d8-e431-4d84-97bb-81179a5c382f" (UID: "654da7d8-e431-4d84-97bb-81179a5c382f"). InnerVolumeSpecName "kube-api-access-mrxqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:26:05 crc kubenswrapper[4699]: I0226 12:26:05.213946 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrxqm\" (UniqueName: \"kubernetes.io/projected/654da7d8-e431-4d84-97bb-81179a5c382f-kube-api-access-mrxqm\") on node \"crc\" DevicePath \"\"" Feb 26 12:26:05 crc kubenswrapper[4699]: I0226 12:26:05.707873 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535146-glcgw" event={"ID":"654da7d8-e431-4d84-97bb-81179a5c382f","Type":"ContainerDied","Data":"7494a362efef771ab7f6ae41a997f1736c91f2472d7fbc4bddad128eb8ce5e9a"} Feb 26 12:26:05 crc kubenswrapper[4699]: I0226 12:26:05.707925 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7494a362efef771ab7f6ae41a997f1736c91f2472d7fbc4bddad128eb8ce5e9a" Feb 26 12:26:05 crc kubenswrapper[4699]: I0226 12:26:05.707946 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535146-glcgw" Feb 26 12:26:06 crc kubenswrapper[4699]: I0226 12:26:06.102666 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535140-wg97p"] Feb 26 12:26:06 crc kubenswrapper[4699]: I0226 12:26:06.116280 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535140-wg97p"] Feb 26 12:26:06 crc kubenswrapper[4699]: I0226 12:26:06.272520 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="924cba42-fd14-4d50-815d-0d8fa83c6b06" path="/var/lib/kubelet/pods/924cba42-fd14-4d50-815d-0d8fa83c6b06/volumes" Feb 26 12:26:09 crc kubenswrapper[4699]: I0226 12:26:09.260964 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:26:09 crc kubenswrapper[4699]: E0226 12:26:09.262050 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:26:23 crc kubenswrapper[4699]: I0226 12:26:23.261284 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:26:23 crc kubenswrapper[4699]: E0226 12:26:23.263209 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:26:37 crc kubenswrapper[4699]: I0226 12:26:37.260898 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:26:37 crc kubenswrapper[4699]: E0226 12:26:37.261602 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:26:41 crc kubenswrapper[4699]: I0226 12:26:41.408622 4699 scope.go:117] "RemoveContainer" containerID="74ea3c51dc439314ff3bb87ede5fd5f905e28e2682d357fd7d7822dde4facddf" Feb 26 12:26:52 crc kubenswrapper[4699]: I0226 12:26:52.261536 4699 scope.go:117] "RemoveContainer" 
containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:26:52 crc kubenswrapper[4699]: E0226 12:26:52.262521 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:27:03 crc kubenswrapper[4699]: I0226 12:27:03.260878 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:27:03 crc kubenswrapper[4699]: E0226 12:27:03.261719 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:27:18 crc kubenswrapper[4699]: I0226 12:27:18.260627 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:27:18 crc kubenswrapper[4699]: E0226 12:27:18.261524 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.163705 4699 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6t9z2"] Feb 26 12:27:33 crc kubenswrapper[4699]: E0226 12:27:33.168478 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654da7d8-e431-4d84-97bb-81179a5c382f" containerName="oc" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.168499 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="654da7d8-e431-4d84-97bb-81179a5c382f" containerName="oc" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.168703 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="654da7d8-e431-4d84-97bb-81179a5c382f" containerName="oc" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.170169 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.179643 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t9z2"] Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.260423 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30" Feb 26 12:27:33 crc kubenswrapper[4699]: E0226 12:27:33.260766 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.361457 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82dx9\" (UniqueName: \"kubernetes.io/projected/eaaa487f-21d2-470a-9bce-914a42da0710-kube-api-access-82dx9\") pod 
\"redhat-marketplace-6t9z2\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") " pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.361980 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-catalog-content\") pod \"redhat-marketplace-6t9z2\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") " pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.362602 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-utilities\") pod \"redhat-marketplace-6t9z2\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") " pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.464243 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82dx9\" (UniqueName: \"kubernetes.io/projected/eaaa487f-21d2-470a-9bce-914a42da0710-kube-api-access-82dx9\") pod \"redhat-marketplace-6t9z2\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") " pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.464314 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-catalog-content\") pod \"redhat-marketplace-6t9z2\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") " pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.464446 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-utilities\") 
pod \"redhat-marketplace-6t9z2\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") " pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.464923 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-catalog-content\") pod \"redhat-marketplace-6t9z2\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") " pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.465138 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-utilities\") pod \"redhat-marketplace-6t9z2\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") " pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.484712 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82dx9\" (UniqueName: \"kubernetes.io/projected/eaaa487f-21d2-470a-9bce-914a42da0710-kube-api-access-82dx9\") pod \"redhat-marketplace-6t9z2\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") " pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.487478 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:33 crc kubenswrapper[4699]: I0226 12:27:33.953630 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t9z2"] Feb 26 12:27:34 crc kubenswrapper[4699]: I0226 12:27:34.495400 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t9z2" event={"ID":"eaaa487f-21d2-470a-9bce-914a42da0710","Type":"ContainerStarted","Data":"482f4c95523e33a2ef21be55b9a8f8be2b13ce328c104dfd3b67ee35efb3e958"} Feb 26 12:27:35 crc kubenswrapper[4699]: I0226 12:27:35.507150 4699 generic.go:334] "Generic (PLEG): container finished" podID="eaaa487f-21d2-470a-9bce-914a42da0710" containerID="fc69724a90e386282f69cffc1a4653e7d87b6517a937f65097c7cf5f0f76da29" exitCode=0 Feb 26 12:27:35 crc kubenswrapper[4699]: I0226 12:27:35.507329 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t9z2" event={"ID":"eaaa487f-21d2-470a-9bce-914a42da0710","Type":"ContainerDied","Data":"fc69724a90e386282f69cffc1a4653e7d87b6517a937f65097c7cf5f0f76da29"} Feb 26 12:27:37 crc kubenswrapper[4699]: I0226 12:27:37.529549 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t9z2" event={"ID":"eaaa487f-21d2-470a-9bce-914a42da0710","Type":"ContainerStarted","Data":"28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0"} Feb 26 12:27:38 crc kubenswrapper[4699]: I0226 12:27:38.541801 4699 generic.go:334] "Generic (PLEG): container finished" podID="eaaa487f-21d2-470a-9bce-914a42da0710" containerID="28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0" exitCode=0 Feb 26 12:27:38 crc kubenswrapper[4699]: I0226 12:27:38.541897 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t9z2" 
event={"ID":"eaaa487f-21d2-470a-9bce-914a42da0710","Type":"ContainerDied","Data":"28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0"} Feb 26 12:27:39 crc kubenswrapper[4699]: I0226 12:27:39.551082 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t9z2" event={"ID":"eaaa487f-21d2-470a-9bce-914a42da0710","Type":"ContainerStarted","Data":"e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c"} Feb 26 12:27:39 crc kubenswrapper[4699]: I0226 12:27:39.576293 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6t9z2" podStartSLOduration=3.027688096 podStartE2EDuration="6.57627032s" podCreationTimestamp="2026-02-26 12:27:33 +0000 UTC" firstStartedPulling="2026-02-26 12:27:35.51129264 +0000 UTC m=+4601.322119074" lastFinishedPulling="2026-02-26 12:27:39.059874874 +0000 UTC m=+4604.870701298" observedRunningTime="2026-02-26 12:27:39.566286161 +0000 UTC m=+4605.377112615" watchObservedRunningTime="2026-02-26 12:27:39.57627032 +0000 UTC m=+4605.387096754" Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.557184 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jt4jp"] Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.560243 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jt4jp" Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.587052 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jt4jp"] Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.623276 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-utilities\") pod \"certified-operators-jt4jp\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") " pod="openshift-marketplace/certified-operators-jt4jp" Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.623570 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-catalog-content\") pod \"certified-operators-jt4jp\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") " pod="openshift-marketplace/certified-operators-jt4jp" Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.623825 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghqg8\" (UniqueName: \"kubernetes.io/projected/02d3bec2-0500-4cf0-bd86-afb60afff196-kube-api-access-ghqg8\") pod \"certified-operators-jt4jp\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") " pod="openshift-marketplace/certified-operators-jt4jp" Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.725701 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghqg8\" (UniqueName: \"kubernetes.io/projected/02d3bec2-0500-4cf0-bd86-afb60afff196-kube-api-access-ghqg8\") pod \"certified-operators-jt4jp\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") " pod="openshift-marketplace/certified-operators-jt4jp" Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.725777 4699 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-utilities\") pod \"certified-operators-jt4jp\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") " pod="openshift-marketplace/certified-operators-jt4jp" Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.725809 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-catalog-content\") pod \"certified-operators-jt4jp\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") " pod="openshift-marketplace/certified-operators-jt4jp" Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.726386 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-catalog-content\") pod \"certified-operators-jt4jp\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") " pod="openshift-marketplace/certified-operators-jt4jp" Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.726904 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-utilities\") pod \"certified-operators-jt4jp\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") " pod="openshift-marketplace/certified-operators-jt4jp" Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.749798 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghqg8\" (UniqueName: \"kubernetes.io/projected/02d3bec2-0500-4cf0-bd86-afb60afff196-kube-api-access-ghqg8\") pod \"certified-operators-jt4jp\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") " pod="openshift-marketplace/certified-operators-jt4jp" Feb 26 12:27:41 crc kubenswrapper[4699]: I0226 12:27:41.885735 4699 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jt4jp" Feb 26 12:27:42 crc kubenswrapper[4699]: I0226 12:27:42.360449 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jt4jp"] Feb 26 12:27:42 crc kubenswrapper[4699]: I0226 12:27:42.576404 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt4jp" event={"ID":"02d3bec2-0500-4cf0-bd86-afb60afff196","Type":"ContainerStarted","Data":"02a24e01b1c182ae9a9efe8461a684df616383c8ce03e663ec76416efe2f36ea"} Feb 26 12:27:43 crc kubenswrapper[4699]: I0226 12:27:43.487817 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:43 crc kubenswrapper[4699]: I0226 12:27:43.489392 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:43 crc kubenswrapper[4699]: I0226 12:27:43.548211 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:43 crc kubenswrapper[4699]: I0226 12:27:43.586994 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt4jp" event={"ID":"02d3bec2-0500-4cf0-bd86-afb60afff196","Type":"ContainerDied","Data":"b4022229d156fc5677e43d0ae8674a6a03ab663afab5557ed38024993deb55f7"} Feb 26 12:27:43 crc kubenswrapper[4699]: I0226 12:27:43.586930 4699 generic.go:334] "Generic (PLEG): container finished" podID="02d3bec2-0500-4cf0-bd86-afb60afff196" containerID="b4022229d156fc5677e43d0ae8674a6a03ab663afab5557ed38024993deb55f7" exitCode=0 Feb 26 12:27:44 crc kubenswrapper[4699]: I0226 12:27:44.647474 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6t9z2" Feb 26 12:27:46 crc kubenswrapper[4699]: I0226 12:27:46.543933 4699 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t9z2"]
Feb 26 12:27:46 crc kubenswrapper[4699]: I0226 12:27:46.615185 4699 generic.go:334] "Generic (PLEG): container finished" podID="02d3bec2-0500-4cf0-bd86-afb60afff196" containerID="596ca82299eb4b3c2c0c7cdfb8e6e760467fefb1fafd10f080a0a95e6093e4d7" exitCode=0
Feb 26 12:27:46 crc kubenswrapper[4699]: I0226 12:27:46.615390 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6t9z2" podUID="eaaa487f-21d2-470a-9bce-914a42da0710" containerName="registry-server" containerID="cri-o://e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c" gracePeriod=2
Feb 26 12:27:46 crc kubenswrapper[4699]: I0226 12:27:46.615377 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt4jp" event={"ID":"02d3bec2-0500-4cf0-bd86-afb60afff196","Type":"ContainerDied","Data":"596ca82299eb4b3c2c0c7cdfb8e6e760467fefb1fafd10f080a0a95e6093e4d7"}
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.150593 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6t9z2"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.251459 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82dx9\" (UniqueName: \"kubernetes.io/projected/eaaa487f-21d2-470a-9bce-914a42da0710-kube-api-access-82dx9\") pod \"eaaa487f-21d2-470a-9bce-914a42da0710\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") "
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.253204 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-utilities\") pod \"eaaa487f-21d2-470a-9bce-914a42da0710\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") "
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.253242 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-catalog-content\") pod \"eaaa487f-21d2-470a-9bce-914a42da0710\" (UID: \"eaaa487f-21d2-470a-9bce-914a42da0710\") "
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.254776 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-utilities" (OuterVolumeSpecName: "utilities") pod "eaaa487f-21d2-470a-9bce-914a42da0710" (UID: "eaaa487f-21d2-470a-9bce-914a42da0710"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.291904 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eaaa487f-21d2-470a-9bce-914a42da0710" (UID: "eaaa487f-21d2-470a-9bce-914a42da0710"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.356709 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.356978 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaaa487f-21d2-470a-9bce-914a42da0710-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.626067 4699 generic.go:334] "Generic (PLEG): container finished" podID="eaaa487f-21d2-470a-9bce-914a42da0710" containerID="e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c" exitCode=0
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.626186 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t9z2" event={"ID":"eaaa487f-21d2-470a-9bce-914a42da0710","Type":"ContainerDied","Data":"e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c"}
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.626228 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t9z2" event={"ID":"eaaa487f-21d2-470a-9bce-914a42da0710","Type":"ContainerDied","Data":"482f4c95523e33a2ef21be55b9a8f8be2b13ce328c104dfd3b67ee35efb3e958"}
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.626253 4699 scope.go:117] "RemoveContainer" containerID="e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.627354 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6t9z2"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.629220 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt4jp" event={"ID":"02d3bec2-0500-4cf0-bd86-afb60afff196","Type":"ContainerStarted","Data":"ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521"}
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.646016 4699 scope.go:117] "RemoveContainer" containerID="28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.658447 4699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jt4jp" podStartSLOduration=3.266455104 podStartE2EDuration="6.658425395s" podCreationTimestamp="2026-02-26 12:27:41 +0000 UTC" firstStartedPulling="2026-02-26 12:27:43.589416978 +0000 UTC m=+4609.400243402" lastFinishedPulling="2026-02-26 12:27:46.981387259 +0000 UTC m=+4612.792213693" observedRunningTime="2026-02-26 12:27:47.65241493 +0000 UTC m=+4613.463241374" watchObservedRunningTime="2026-02-26 12:27:47.658425395 +0000 UTC m=+4613.469251829"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.738778 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaaa487f-21d2-470a-9bce-914a42da0710-kube-api-access-82dx9" (OuterVolumeSpecName: "kube-api-access-82dx9") pod "eaaa487f-21d2-470a-9bce-914a42da0710" (UID: "eaaa487f-21d2-470a-9bce-914a42da0710"). InnerVolumeSpecName "kube-api-access-82dx9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.741891 4699 scope.go:117] "RemoveContainer" containerID="fc69724a90e386282f69cffc1a4653e7d87b6517a937f65097c7cf5f0f76da29"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.766327 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82dx9\" (UniqueName: \"kubernetes.io/projected/eaaa487f-21d2-470a-9bce-914a42da0710-kube-api-access-82dx9\") on node \"crc\" DevicePath \"\""
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.832383 4699 scope.go:117] "RemoveContainer" containerID="e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c"
Feb 26 12:27:47 crc kubenswrapper[4699]: E0226 12:27:47.832983 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c\": container with ID starting with e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c not found: ID does not exist" containerID="e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.833015 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c"} err="failed to get container status \"e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c\": rpc error: code = NotFound desc = could not find container \"e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c\": container with ID starting with e77c07126944e5ff02c11f60634c728650d676e0f4404cb5b18f9299b090c39c not found: ID does not exist"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.833036 4699 scope.go:117] "RemoveContainer" containerID="28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0"
Feb 26 12:27:47 crc kubenswrapper[4699]: E0226 12:27:47.833404 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0\": container with ID starting with 28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0 not found: ID does not exist" containerID="28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.833425 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0"} err="failed to get container status \"28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0\": rpc error: code = NotFound desc = could not find container \"28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0\": container with ID starting with 28c31cfddd5850fe577c6d8f161045b7c089c1ee5bd2e321948f303c98410cf0 not found: ID does not exist"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.833440 4699 scope.go:117] "RemoveContainer" containerID="fc69724a90e386282f69cffc1a4653e7d87b6517a937f65097c7cf5f0f76da29"
Feb 26 12:27:47 crc kubenswrapper[4699]: E0226 12:27:47.833765 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc69724a90e386282f69cffc1a4653e7d87b6517a937f65097c7cf5f0f76da29\": container with ID starting with fc69724a90e386282f69cffc1a4653e7d87b6517a937f65097c7cf5f0f76da29 not found: ID does not exist" containerID="fc69724a90e386282f69cffc1a4653e7d87b6517a937f65097c7cf5f0f76da29"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.833786 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc69724a90e386282f69cffc1a4653e7d87b6517a937f65097c7cf5f0f76da29"} err="failed to get container status \"fc69724a90e386282f69cffc1a4653e7d87b6517a937f65097c7cf5f0f76da29\": rpc error: code = NotFound desc = could not find container \"fc69724a90e386282f69cffc1a4653e7d87b6517a937f65097c7cf5f0f76da29\": container with ID starting with fc69724a90e386282f69cffc1a4653e7d87b6517a937f65097c7cf5f0f76da29 not found: ID does not exist"
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.972976 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t9z2"]
Feb 26 12:27:47 crc kubenswrapper[4699]: I0226 12:27:47.982035 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t9z2"]
Feb 26 12:27:48 crc kubenswrapper[4699]: I0226 12:27:48.261221 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30"
Feb 26 12:27:48 crc kubenswrapper[4699]: E0226 12:27:48.261541 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 12:27:48 crc kubenswrapper[4699]: I0226 12:27:48.274872 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaaa487f-21d2-470a-9bce-914a42da0710" path="/var/lib/kubelet/pods/eaaa487f-21d2-470a-9bce-914a42da0710/volumes"
Feb 26 12:27:51 crc kubenswrapper[4699]: I0226 12:27:51.886874 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jt4jp"
Feb 26 12:27:51 crc kubenswrapper[4699]: I0226 12:27:51.887531 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jt4jp"
Feb 26 12:27:51 crc kubenswrapper[4699]: I0226 12:27:51.939050 4699 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jt4jp"
Feb 26 12:27:52 crc kubenswrapper[4699]: I0226 12:27:52.728186 4699 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jt4jp"
Feb 26 12:27:53 crc kubenswrapper[4699]: I0226 12:27:53.341228 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jt4jp"]
Feb 26 12:27:54 crc kubenswrapper[4699]: I0226 12:27:54.700576 4699 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jt4jp" podUID="02d3bec2-0500-4cf0-bd86-afb60afff196" containerName="registry-server" containerID="cri-o://ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521" gracePeriod=2
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.224456 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jt4jp"
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.336756 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-catalog-content\") pod \"02d3bec2-0500-4cf0-bd86-afb60afff196\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") "
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.336879 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-utilities\") pod \"02d3bec2-0500-4cf0-bd86-afb60afff196\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") "
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.336989 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghqg8\" (UniqueName: \"kubernetes.io/projected/02d3bec2-0500-4cf0-bd86-afb60afff196-kube-api-access-ghqg8\") pod \"02d3bec2-0500-4cf0-bd86-afb60afff196\" (UID: \"02d3bec2-0500-4cf0-bd86-afb60afff196\") "
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.338653 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-utilities" (OuterVolumeSpecName: "utilities") pod "02d3bec2-0500-4cf0-bd86-afb60afff196" (UID: "02d3bec2-0500-4cf0-bd86-afb60afff196"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.345161 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d3bec2-0500-4cf0-bd86-afb60afff196-kube-api-access-ghqg8" (OuterVolumeSpecName: "kube-api-access-ghqg8") pod "02d3bec2-0500-4cf0-bd86-afb60afff196" (UID: "02d3bec2-0500-4cf0-bd86-afb60afff196"). InnerVolumeSpecName "kube-api-access-ghqg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.414509 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02d3bec2-0500-4cf0-bd86-afb60afff196" (UID: "02d3bec2-0500-4cf0-bd86-afb60afff196"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.439423 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghqg8\" (UniqueName: \"kubernetes.io/projected/02d3bec2-0500-4cf0-bd86-afb60afff196-kube-api-access-ghqg8\") on node \"crc\" DevicePath \"\""
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.439455 4699 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.439465 4699 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d3bec2-0500-4cf0-bd86-afb60afff196-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.711053 4699 generic.go:334] "Generic (PLEG): container finished" podID="02d3bec2-0500-4cf0-bd86-afb60afff196" containerID="ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521" exitCode=0
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.711149 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt4jp" event={"ID":"02d3bec2-0500-4cf0-bd86-afb60afff196","Type":"ContainerDied","Data":"ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521"}
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.711196 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt4jp" event={"ID":"02d3bec2-0500-4cf0-bd86-afb60afff196","Type":"ContainerDied","Data":"02a24e01b1c182ae9a9efe8461a684df616383c8ce03e663ec76416efe2f36ea"}
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.711209 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jt4jp"
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.711231 4699 scope.go:117] "RemoveContainer" containerID="ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521"
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.733227 4699 scope.go:117] "RemoveContainer" containerID="596ca82299eb4b3c2c0c7cdfb8e6e760467fefb1fafd10f080a0a95e6093e4d7"
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.757512 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jt4jp"]
Feb 26 12:27:55 crc kubenswrapper[4699]: I0226 12:27:55.766874 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jt4jp"]
Feb 26 12:27:56 crc kubenswrapper[4699]: I0226 12:27:56.289248 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d3bec2-0500-4cf0-bd86-afb60afff196" path="/var/lib/kubelet/pods/02d3bec2-0500-4cf0-bd86-afb60afff196/volumes"
Feb 26 12:27:56 crc kubenswrapper[4699]: I0226 12:27:56.339323 4699 scope.go:117] "RemoveContainer" containerID="b4022229d156fc5677e43d0ae8674a6a03ab663afab5557ed38024993deb55f7"
Feb 26 12:27:56 crc kubenswrapper[4699]: I0226 12:27:56.426488 4699 scope.go:117] "RemoveContainer" containerID="ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521"
Feb 26 12:27:56 crc kubenswrapper[4699]: E0226 12:27:56.429707 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521\": container with ID starting with ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521 not found: ID does not exist" containerID="ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521"
Feb 26 12:27:56 crc kubenswrapper[4699]: I0226 12:27:56.429757 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521"} err="failed to get container status \"ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521\": rpc error: code = NotFound desc = could not find container \"ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521\": container with ID starting with ba7fc42bb71cb3014343c4f82cbede20f0bcd5139709fa5f64efb305f44bb521 not found: ID does not exist"
Feb 26 12:27:56 crc kubenswrapper[4699]: I0226 12:27:56.429791 4699 scope.go:117] "RemoveContainer" containerID="596ca82299eb4b3c2c0c7cdfb8e6e760467fefb1fafd10f080a0a95e6093e4d7"
Feb 26 12:27:56 crc kubenswrapper[4699]: E0226 12:27:56.430275 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"596ca82299eb4b3c2c0c7cdfb8e6e760467fefb1fafd10f080a0a95e6093e4d7\": container with ID starting with 596ca82299eb4b3c2c0c7cdfb8e6e760467fefb1fafd10f080a0a95e6093e4d7 not found: ID does not exist" containerID="596ca82299eb4b3c2c0c7cdfb8e6e760467fefb1fafd10f080a0a95e6093e4d7"
Feb 26 12:27:56 crc kubenswrapper[4699]: I0226 12:27:56.430313 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596ca82299eb4b3c2c0c7cdfb8e6e760467fefb1fafd10f080a0a95e6093e4d7"} err="failed to get container status \"596ca82299eb4b3c2c0c7cdfb8e6e760467fefb1fafd10f080a0a95e6093e4d7\": rpc error: code = NotFound desc = could not find container \"596ca82299eb4b3c2c0c7cdfb8e6e760467fefb1fafd10f080a0a95e6093e4d7\": container with ID starting with 596ca82299eb4b3c2c0c7cdfb8e6e760467fefb1fafd10f080a0a95e6093e4d7 not found: ID does not exist"
Feb 26 12:27:56 crc kubenswrapper[4699]: I0226 12:27:56.430334 4699 scope.go:117] "RemoveContainer" containerID="b4022229d156fc5677e43d0ae8674a6a03ab663afab5557ed38024993deb55f7"
Feb 26 12:27:56 crc kubenswrapper[4699]: E0226 12:27:56.430689 4699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4022229d156fc5677e43d0ae8674a6a03ab663afab5557ed38024993deb55f7\": container with ID starting with b4022229d156fc5677e43d0ae8674a6a03ab663afab5557ed38024993deb55f7 not found: ID does not exist" containerID="b4022229d156fc5677e43d0ae8674a6a03ab663afab5557ed38024993deb55f7"
Feb 26 12:27:56 crc kubenswrapper[4699]: I0226 12:27:56.430721 4699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4022229d156fc5677e43d0ae8674a6a03ab663afab5557ed38024993deb55f7"} err="failed to get container status \"b4022229d156fc5677e43d0ae8674a6a03ab663afab5557ed38024993deb55f7\": rpc error: code = NotFound desc = could not find container \"b4022229d156fc5677e43d0ae8674a6a03ab663afab5557ed38024993deb55f7\": container with ID starting with b4022229d156fc5677e43d0ae8674a6a03ab663afab5557ed38024993deb55f7 not found: ID does not exist"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.158054 4699 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535148-xbtxt"]
Feb 26 12:28:00 crc kubenswrapper[4699]: E0226 12:28:00.159255 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaaa487f-21d2-470a-9bce-914a42da0710" containerName="registry-server"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.159276 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaaa487f-21d2-470a-9bce-914a42da0710" containerName="registry-server"
Feb 26 12:28:00 crc kubenswrapper[4699]: E0226 12:28:00.159293 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaaa487f-21d2-470a-9bce-914a42da0710" containerName="extract-utilities"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.159300 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaaa487f-21d2-470a-9bce-914a42da0710" containerName="extract-utilities"
Feb 26 12:28:00 crc kubenswrapper[4699]: E0226 12:28:00.159315 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d3bec2-0500-4cf0-bd86-afb60afff196" containerName="registry-server"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.159324 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d3bec2-0500-4cf0-bd86-afb60afff196" containerName="registry-server"
Feb 26 12:28:00 crc kubenswrapper[4699]: E0226 12:28:00.159373 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaaa487f-21d2-470a-9bce-914a42da0710" containerName="extract-content"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.159384 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaaa487f-21d2-470a-9bce-914a42da0710" containerName="extract-content"
Feb 26 12:28:00 crc kubenswrapper[4699]: E0226 12:28:00.161406 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d3bec2-0500-4cf0-bd86-afb60afff196" containerName="extract-utilities"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.161425 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d3bec2-0500-4cf0-bd86-afb60afff196" containerName="extract-utilities"
Feb 26 12:28:00 crc kubenswrapper[4699]: E0226 12:28:00.161445 4699 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d3bec2-0500-4cf0-bd86-afb60afff196" containerName="extract-content"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.161452 4699 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d3bec2-0500-4cf0-bd86-afb60afff196" containerName="extract-content"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.161796 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d3bec2-0500-4cf0-bd86-afb60afff196" containerName="registry-server"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.161820 4699 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaaa487f-21d2-470a-9bce-914a42da0710" containerName="registry-server"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.163386 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535148-xbtxt"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.168076 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.172088 4699 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.172681 4699 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-9855d"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.185610 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535148-xbtxt"]
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.239894 4699 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-992z2\" (UniqueName: \"kubernetes.io/projected/c75250fb-35c3-4966-a995-33aaa68ec5e9-kube-api-access-992z2\") pod \"auto-csr-approver-29535148-xbtxt\" (UID: \"c75250fb-35c3-4966-a995-33aaa68ec5e9\") " pod="openshift-infra/auto-csr-approver-29535148-xbtxt"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.261010 4699 scope.go:117] "RemoveContainer" containerID="db77674b6c4932546ef12383ba478551ccaf182e6ce1c41c613b6bd49208df30"
Feb 26 12:28:00 crc kubenswrapper[4699]: E0226 12:28:00.261332 4699 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-28p79_openshift-machine-config-operator(e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff)\"" pod="openshift-machine-config-operator/machine-config-daemon-28p79" podUID="e2a8e4ba-2bc7-4e9e-b7b9-dfce00cecdff"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.341095 4699 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-992z2\" (UniqueName: \"kubernetes.io/projected/c75250fb-35c3-4966-a995-33aaa68ec5e9-kube-api-access-992z2\") pod \"auto-csr-approver-29535148-xbtxt\" (UID: \"c75250fb-35c3-4966-a995-33aaa68ec5e9\") " pod="openshift-infra/auto-csr-approver-29535148-xbtxt"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.360564 4699 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-992z2\" (UniqueName: \"kubernetes.io/projected/c75250fb-35c3-4966-a995-33aaa68ec5e9-kube-api-access-992z2\") pod \"auto-csr-approver-29535148-xbtxt\" (UID: \"c75250fb-35c3-4966-a995-33aaa68ec5e9\") " pod="openshift-infra/auto-csr-approver-29535148-xbtxt"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.486091 4699 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535148-xbtxt"
Feb 26 12:28:00 crc kubenswrapper[4699]: I0226 12:28:00.933441 4699 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535148-xbtxt"]
Feb 26 12:28:01 crc kubenswrapper[4699]: I0226 12:28:01.773046 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535148-xbtxt" event={"ID":"c75250fb-35c3-4966-a995-33aaa68ec5e9","Type":"ContainerStarted","Data":"f300627272e8f7839b6085e86065bce05f2918a517a3d28a295e941c8f546eed"}
Feb 26 12:28:02 crc kubenswrapper[4699]: I0226 12:28:02.782731 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535148-xbtxt" event={"ID":"c75250fb-35c3-4966-a995-33aaa68ec5e9","Type":"ContainerStarted","Data":"802b2f4b84123adda32c52f3d96782204f705a61314b180c33cfe926c167eee1"}
Feb 26 12:28:03 crc kubenswrapper[4699]: I0226 12:28:03.791104 4699 generic.go:334] "Generic (PLEG): container finished" podID="c75250fb-35c3-4966-a995-33aaa68ec5e9" containerID="802b2f4b84123adda32c52f3d96782204f705a61314b180c33cfe926c167eee1" exitCode=0
Feb 26 12:28:03 crc kubenswrapper[4699]: I0226 12:28:03.791370 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535148-xbtxt" event={"ID":"c75250fb-35c3-4966-a995-33aaa68ec5e9","Type":"ContainerDied","Data":"802b2f4b84123adda32c52f3d96782204f705a61314b180c33cfe926c167eee1"}
Feb 26 12:28:05 crc kubenswrapper[4699]: I0226 12:28:05.216662 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535148-xbtxt"
Feb 26 12:28:05 crc kubenswrapper[4699]: I0226 12:28:05.352907 4699 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-992z2\" (UniqueName: \"kubernetes.io/projected/c75250fb-35c3-4966-a995-33aaa68ec5e9-kube-api-access-992z2\") pod \"c75250fb-35c3-4966-a995-33aaa68ec5e9\" (UID: \"c75250fb-35c3-4966-a995-33aaa68ec5e9\") "
Feb 26 12:28:05 crc kubenswrapper[4699]: I0226 12:28:05.366414 4699 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c75250fb-35c3-4966-a995-33aaa68ec5e9-kube-api-access-992z2" (OuterVolumeSpecName: "kube-api-access-992z2") pod "c75250fb-35c3-4966-a995-33aaa68ec5e9" (UID: "c75250fb-35c3-4966-a995-33aaa68ec5e9"). InnerVolumeSpecName "kube-api-access-992z2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:28:05 crc kubenswrapper[4699]: I0226 12:28:05.455717 4699 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-992z2\" (UniqueName: \"kubernetes.io/projected/c75250fb-35c3-4966-a995-33aaa68ec5e9-kube-api-access-992z2\") on node \"crc\" DevicePath \"\""
Feb 26 12:28:05 crc kubenswrapper[4699]: I0226 12:28:05.812951 4699 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535148-xbtxt" event={"ID":"c75250fb-35c3-4966-a995-33aaa68ec5e9","Type":"ContainerDied","Data":"f300627272e8f7839b6085e86065bce05f2918a517a3d28a295e941c8f546eed"}
Feb 26 12:28:05 crc kubenswrapper[4699]: I0226 12:28:05.812996 4699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f300627272e8f7839b6085e86065bce05f2918a517a3d28a295e941c8f546eed"
Feb 26 12:28:05 crc kubenswrapper[4699]: I0226 12:28:05.813062 4699 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535148-xbtxt"
Feb 26 12:28:05 crc kubenswrapper[4699]: I0226 12:28:05.867546 4699 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535142-vfnhz"]
Feb 26 12:28:05 crc kubenswrapper[4699]: I0226 12:28:05.877990 4699 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535142-vfnhz"]
Feb 26 12:28:06 crc kubenswrapper[4699]: I0226 12:28:06.275437 4699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a1ba6a1-6a82-47c4-9706-f77275f34d3a" path="/var/lib/kubelet/pods/8a1ba6a1-6a82-47c4-9706-f77275f34d3a/volumes"